On-line learning with adaptive back-propagation in two-layer networks

Ansgar H.L. West, David Saad

Research output: Contribution to journal › Article › peer-review

Abstract

An adaptive back-propagation algorithm parameterized by an inverse temperature 1/T is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, we analyse these learning algorithms in both the symmetric and the convergence phase for finite learning rates in the case of uncorrelated teachers of similar but arbitrary length T. These analyses show that adaptive back-propagation generally results in faster training than gradient descent, by breaking the symmetry between hidden units more efficiently and by providing faster convergence to optimal generalization.
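The sketch below illustrates the baseline setting the abstract refers to: on-line gradient descent (standard back-propagation) for a two-layer soft-committee-machine student learning from a fixed teacher, with a new example drawn at each step. The erf activation, the variable names, and in particular the inverse-temperature re-weighting of the per-hidden-unit error signals are assumptions made for illustration, not the paper's exact prescription.

```python
import numpy as np
from scipy.special import erf

# Illustrative sketch (not the paper's exact algorithm): on-line training of a
# two-layer soft committee machine.  With beta = 0 the update reduces to plain
# gradient descent; beta > 0 mimics an adaptive, temperature-controlled
# re-weighting of the hidden-unit updates (assumed form, for illustration only).

rng = np.random.default_rng(0)

N = 100      # input dimension
K = 3        # student hidden units
M = 3        # teacher hidden units
eta = 0.5    # learning rate
beta = 1.0   # assumed inverse-temperature parameter of the adaptive variant

g = lambda x: erf(x / np.sqrt(2.0))                        # hidden-unit activation
g_prime = lambda x: np.sqrt(2.0 / np.pi) * np.exp(-x**2 / 2.0)

B = rng.standard_normal((M, N))          # teacher weight vectors (fixed)
J = 0.01 * rng.standard_normal((K, N))   # student weight vectors (trained)

def forward(W, xi):
    """Return the network output and the hidden-unit pre-activations."""
    x = W @ xi / np.sqrt(N)
    return g(x).sum(), x

for step in range(10_000):
    xi = rng.standard_normal(N)              # fresh example each step (on-line)
    zeta, _ = forward(B, xi)                 # teacher label
    sigma, x = forward(J, xi)                # student prediction
    delta = (zeta - sigma) * g_prime(x)      # per-hidden-unit error signal

    # Assumed adaptive re-weighting: emphasise hidden units with larger error
    # signals via an inverse-temperature factor, normalised so that beta -> 0
    # recovers the standard gradient-descent update.
    weights = np.exp(beta * np.abs(delta))
    weights /= weights.mean()

    J += (eta / N) * (weights * delta)[:, None] * xi[None, :]
```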
Original language: English
Pages (from-to): 3426-3445
Number of pages: 20
Journal: Physical Review E
Volume: 56
Issue number: 3
DOIs
Publication status: Published - Sept 1997

Bibliographical note

Copyright of the American Physical Society

Keywords

  • adaptive back-propagation
  • algorithm
  • inverse temperature
  • gradient descent
  • on-line learning
  • neural networks
  • learning algorithms
