Dynamics of on-line learning in radial basis function networks

Jason Freeman, David Saad

    Research output: Contribution to journal › Article › peer-review

    Abstract

    On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of the generalization error is calculated within a framework that allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate is described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the analysis of the convergence phase allows maximal and optimal learning rates to be derived. In addition to the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.
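    To make the setting concrete, the sketch below shows on-line (stochastic gradient) learning for an RBF network in a student-teacher scenario, where a student network with adaptable centres and weights is trained on examples labelled by a fixed teacher network. This is only an illustrative simulation in the spirit of the abstract, not the paper's analytic framework; the input dimension N, numbers of hidden units M and K, learning rate eta, basis width sigma2, and the Monte Carlo error estimate are all assumed values chosen for the example.

    import numpy as np

    # Minimal sketch of on-line gradient learning in a radial basis function
    # (RBF) network, student-teacher setup. All concrete settings below are
    # illustrative assumptions, not the paper's exact formulation.

    rng = np.random.default_rng(0)

    N = 10             # input dimension (assumed)
    M = 3              # teacher hidden units (assumed)
    K = 3              # student hidden units (assumed)
    eta = 0.1          # learning rate (assumed)
    sigma2 = float(N)  # squared width of the Gaussian basis functions (assumed)

    def rbf_output(x, centres, weights):
        """Output of an RBF network: weighted sum of Gaussian basis functions."""
        phi = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * sigma2))
        return weights @ phi, phi

    # Teacher (fixed) and student (adapted) centres and hidden-to-output weights.
    B, v = rng.standard_normal((M, N)), rng.standard_normal(M)
    J, w = rng.standard_normal((K, N)), rng.standard_normal(K)

    for step in range(100_001):
        if step % 20_000 == 0:
            # Monte Carlo estimate of the generalization error; the paper
            # instead computes its evolution analytically.
            xs = rng.standard_normal((2000, N))
            errs = [(rbf_output(xi, J, w)[0] - rbf_output(xi, B, v)[0]) ** 2
                    for xi in xs]
            print(f"step {step:6d}  e_g ~ {0.5 * np.mean(errs):.5f}")

        x = rng.standard_normal(N)         # a fresh example at every step
        y_t, _ = rbf_output(x, B, v)       # teacher label
        y_s, phi = rbf_output(x, J, w)     # student prediction
        delta = y_s - y_t                  # instantaneous error signal

        # On-line gradient descent on the squared error, updating both the
        # hidden-to-output weights and the basis-function centres.
        grad_w = delta * phi
        grad_J = delta * (w * phi)[:, None] * (x - J) / sigma2
        w -= eta * grad_w
        J -= eta * grad_J

    With suitable settings, a run like this lets one watch the estimated generalization error plateau while the student hidden units remain unspecialized, before symmetry breaking and eventual convergence, which is the qualitative picture analyzed in the paper.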

    Original language: English
    Pages (from-to): 907-918
    Number of pages: 12
    Journal: Physical Review E
    Volume: 56
    Issue number: 1
    DOIs
    Publication status: Published - 1997

    Bibliographical note

    Copyright of the American Physical Society

    Keywords

    • on-line learning
    • radial basis function network
    • neural network
    • error
    • learning process
    • symmetric phase
    • symmetry-breaking phase
    • convergence phase
