On-line learning in radial basis function networks

Jason Freeman, David Saad

    Research output: Contribution to journal › Article › peer-review

    Abstract

    An analytic investigation of the average-case learning and generalization properties of radial basis function networks (RBFs) is presented, utilising on-line gradient descent as the learning rule. The analytic method employed allows both the calculation of the generalization error and the examination of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the roles of the learning rate and of the specialization of the hidden units, giving insight into how the time required for training can be reduced. The realizable and over-realizable cases are studied in detail: the phase of learning in which the hidden units are unspecialized (the symmetric phase) and the phase in which asymptotic convergence occurs are analyzed, and their typical properties found. Finally, simulations are performed which strongly confirm the analytic results.
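
    As an illustration of the learning rule analysed here, the following is a minimal sketch (not the authors' code) of on-line gradient descent in a student RBF network trained on examples generated by a teacher RBF, the realizable case when both have the same number of Gaussian hidden units. All names and parameter values (input dimension N, learning rate eta, the fixed common width) are assumptions for illustration; the paper's analysis concerns the average-case dynamics of such updates, not this particular simulation.

        import numpy as np

        # Minimal sketch of on-line gradient descent for an RBF network in a
        # teacher-student setup (realizable case: K == M). All parameter values
        # are illustrative assumptions, not taken from the paper.
        rng = np.random.default_rng(0)
        N = 8            # input dimension
        M, K = 3, 3      # teacher / student hidden units
        eta = 0.1        # learning rate
        width = 1.0      # common, fixed Gaussian basis-function width

        B = rng.normal(size=(M, N))        # teacher centres (fixed)
        v = rng.normal(size=M)             # teacher output weights (fixed)
        m = 0.1 * rng.normal(size=(K, N))  # student centres, nearly unspecialized
        w = 0.1 * rng.normal(size=K)       # student output weights

        def rbf(centres, weights, x):
            """Output of an RBF network: weighted sum of Gaussian responses."""
            s = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2 * width ** 2))
            return weights @ s, s

        for t in range(200_000):
            x = rng.normal(size=N)         # one fresh example per step (on-line)
            y_teacher, _ = rbf(B, v, x)
            y_student, s = rbf(m, w, x)
            delta = y_student - y_teacher

            # Gradient of the single-example squared error 0.5 * delta**2:
            # each update uses only the current example, never a full data set.
            grad_w = delta * s
            grad_m = -(delta * w * s)[:, None] * (m - x) / width ** 2
            w -= eta * grad_w
            m -= eta * grad_m

            if t % 20_000 == 0:
                # The instantaneous error is noisy, but its trace typically shows
                # a long plateau (the symmetric phase) before the hidden units
                # specialize and the error decays towards convergence.
                print(f"step {t:6d}  instantaneous error {0.5 * delta ** 2:.5f}")

    Varying eta in such a sketch exposes the trade-off the abstract alludes to: a larger learning rate tends to shorten the symmetric plateau, while too large a value prevents asymptotic convergence.
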
    Original language: English
    Pages (from-to): 1601-1622
    Number of pages: 22
    Journal: Neural Computation
    Volume: 9
    Issue number: 7
    Publication status: Published - 1 Oct 1997

    Bibliographical note

    Copyright of the Massachusetts Institute of Technology Press (MIT Press)

    Keywords

    • radial basis function networks
    • error
    • network
    • internal dynamics
    • learning rate
    • hidden units
