Abstract
The dynamics of supervised learning in layered neural networks are studied in the regime where the size of the training set is proportional to the number of inputs. The evolution of macroscopic observables, including the two relevant performance measures, can be predicted using dynamical replica theory. Three approximation schemes are derived that eliminate the need to solve a functional saddle-point equation at each time step.
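As a minimal illustration of the setting described above (not the paper's actual formalism), the sketch below trains a simple perceptron student on a fixed training set of size P = αN drawn from a teacher, then reports macroscopic observables of the kind such theories track: the student length Q, the teacher-student overlap R, and the two performance measures (training error and generalization error). All parameter choices (N, α, η) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of inputs
alpha = 4.0                               # training-set size P = alpha * N
P = int(alpha * N)
eta = 0.05                                # learning rate (illustrative choice)

B = rng.standard_normal(N)
B /= np.linalg.norm(B)                    # teacher vector, normalized to |B| = 1
J = rng.standard_normal(N) / np.sqrt(N)   # student vector, initially random

X = rng.standard_normal((P, N))           # fixed training set (restricted regime)
y = np.sign(X @ B)                        # teacher labels

for step in range(20000):
    mu = rng.integers(P)                  # resample from the *fixed* set only
    x, t = X[mu], y[mu]
    if np.sign(J @ x) != t:               # perceptron rule: update on error
        J += (eta / np.sqrt(N)) * t * x

Q = J @ J                                 # student length
R = (J @ B) / np.sqrt(Q)                  # normalized teacher-student overlap
train_err = np.mean(np.sign(X @ J) != y)  # training error on the fixed set
gen_err = np.arccos(R) / np.pi            # generalization error of a perceptron
print(f"Q={Q:.3f}  R={R:.3f}  E_train={train_err:.3f}  E_gen={gen_err:.3f}")
```

The point of the macroscopic description is that, for large N, quantities like Q, R, and the two errors evolve deterministically, so a theory that predicts their trajectories characterizes the learning process without tracking the N-dimensional weight vector itself.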
| Original language | English |
|---|---|
| Pages (from-to) | 5444-5487 |
| Number of pages | 44 |
| Journal | Physical Review E |
| Volume | 62 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Oct 2000 |
Bibliographical note
Copyright of American Physical Society

Keywords
- layered neural networks
- learning dynamics
- dynamical replica theory