Dynamics of on-line learning in radial basis function networks

Jason Freeman, David Saad

Research output: Contribution to journal › Article

Abstract

On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal and optimal learning rates. As well as finding the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.
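To make the setting concrete, the sketch below shows on-line (one-example-at-a-time) gradient-descent training of a small Gaussian RBF network against a fixed random "teacher" network, in the student–teacher style the paper analyzes. This is only an illustrative NumPy sketch, not the paper's analytical framework: the network sizes, unit basis-function width, learning rate, and all variable names are hypothetical choices, and the generalization error is simply estimated by Monte Carlo sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 4      # input dimension and number of hidden units (illustrative)
eta = 0.05                 # learning rate (illustrative)

# Student parameters: adaptable RBF centres and hidden-to-output weights
centers = rng.normal(size=(n_hidden, n_in))
weights = rng.normal(size=n_hidden)

def rbf(x, c):
    """Gaussian basis function with unit width (an assumed choice)."""
    return np.exp(-0.5 * np.sum((x - c) ** 2))

def forward(x):
    """Student output and hidden activations for one input."""
    phi = np.array([rbf(x, c) for c in centers])
    return phi @ weights, phi

# Teacher: a fixed random RBF network of the same form generates the targets
t_centers = rng.normal(size=(n_hidden, n_in))
t_weights = rng.normal(size=n_hidden)

def teacher(x):
    return np.array([rbf(x, c) for c in t_centers]) @ t_weights

def gen_error(n_test=500):
    """Monte Carlo estimate of the student's mean squared generalization error."""
    xs = rng.normal(size=(n_test, n_in))
    return float(np.mean([(forward(x)[0] - teacher(x)) ** 2 for x in xs]))

e0 = gen_error()                        # generalization error before training

for step in range(20000):
    x = rng.normal(size=n_in)           # one fresh random example per step (on-line)
    y, phi = forward(x)
    err = y - teacher(x)                # signed prediction error
    # Gradient descent on the instantaneous loss 0.5 * err**2
    weights -= eta * err * phi
    for k in range(n_hidden):
        centers[k] -= eta * err * weights[k] * phi[k] * (x - centers[k])

e1 = gen_error()                        # generalization error after training
print(e0, e1)                           # error should have decreased
```

Because each update uses a single freshly drawn example, the error decays in the staged fashion the abstract describes (a long symmetric plateau before hidden units specialize), rather than monotonically as in batch training.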

Original language: English
Pages (from-to): 907-918
Number of pages: 12
Journal: Physical Review E
Volume: 56
Issue number: 1
DOI: 10.1103/PhysRevE.56.907
Publication status: Published - 1997


Bibliographical note

Copyright of the American Physical Society

Keywords

  • on-line learning
  • radial basis function network
  • neural network
  • error
  • learning process
  • symmetric phase
  • symmetry-breaking phase
  • convergence phase

Cite this

@article{b9abc87ec041435f9d20a61b206dfd9e,
title = "Dynamics of on-line learning in radial basis function networks",
abstract = "On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal and optimal learning rates. As well as finding the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.",
keywords = "on-line learning, radial basis function network, neural network, error, learning process, symmetric phase, symmetry-breaking phase, convergence phase",
author = "Jason Freeman and David Saad",
note = "Copyright of the American Physical Society",
year = "1997",
doi = "10.1103/PhysRevE.56.907",
language = "English",
volume = "56",
pages = "907--918",
journal = "Physical Review E",
issn = "1539-3755",
publisher = "American Physical Society",
number = "1",
}

Dynamics of on-line learning in radial basis function networks. / Freeman, Jason; Saad, David.

In: Physical Review E, Vol. 56, No. 1, 1997, p. 907-918.


TY - JOUR

T1 - Dynamics of on-line learning in radial basis function networks

AU - Freeman, Jason

AU - Saad, David

N1 - Copyright of the American Physical Society

PY - 1997

Y1 - 1997

N2 - On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal and optimal learning rates. As well as finding the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.

AB - On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal and optimal learning rates. As well as finding the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.

KW - on-line learning

KW - radial basis function network

KW - neural network

KW - error

KW - learning process

KW - symmetric phase

KW - symmetry-breaking phase

KW - convergence phase

UR - http://www.scopus.com/inward/record.url?scp=0006832266&partnerID=8YFLogxK

UR - http://prola.aps.org/pdf/PRE/v56/i1/p907_1

U2 - 10.1103/PhysRevE.56.907

DO - 10.1103/PhysRevE.56.907

M3 - Article

VL - 56

SP - 907

EP - 918

JO - Physical Review E

JF - Physical Review E

SN - 1539-3755

IS - 1

ER -