Abstract
The dynamics of on-line learning is investigated for structurally unrealizable tasks in the context of two-layer neural networks with an arbitrary number of hidden neurons. Within a statistical mechanics framework, a closed set of differential equations describing the learning dynamics is derived for the general case of unrealizable isotropic tasks. In the asymptotic regime the dynamics can be solved analytically in the limit of a large number of hidden neurons, yielding expressions for the residual generalization error, the optimal and critical asymptotic training parameters, and the corresponding prefactor of the generalization error decay.
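The on-line learning scenario described in the abstract can be illustrated with a small simulation. This is a minimal sketch only, assuming the standard soft committee machine setup common in this literature (erf hidden units, fixed unit output weights, a teacher network defining the task, and the η/N learning-rate scaling); all parameter values and names below are illustrative and not taken from the paper.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
N, K, M = 500, 2, 2   # input dimension, student / teacher hidden neurons
eta = 0.5             # learning rate, applied with the eta/N scaling below

# Hidden-unit activation conventional in this literature: g(u) = erf(u / sqrt(2)).
g = np.vectorize(lambda u: erf(u / np.sqrt(2.0)))

def net(W, X):
    """Soft committee machine: sum of erf units with unit output weights."""
    return g(X @ W.T / np.sqrt(N)).sum(axis=1)

def gen_error(W, B, n_test=2000):
    """Monte Carlo estimate of the generalization error (quadratic loss)."""
    X = rng.standard_normal((n_test, N))
    return 0.5 * np.mean((net(W, X) - net(B, X)) ** 2)

B = rng.standard_normal((M, N))          # teacher weights defining the task
W = 1e-3 * rng.standard_normal((K, N))   # student initialized near zero

e0 = gen_error(W, B)
for _ in range(20_000):                  # on-line: one fresh example per step
    x = rng.standard_normal(N)
    delta = net(B, x[None])[0] - net(W, x[None])[0]   # teacher minus student
    h = W @ x / np.sqrt(N)                            # hidden pre-activations
    gprime = np.sqrt(2 / np.pi) * np.exp(-h**2 / 2)   # derivative of g
    W += (eta / N) * delta * np.outer(gprime, x)      # stochastic gradient step
e_final = gen_error(W, B)
```

Averaging the resulting weight-update equations over the input distribution is what yields, in the statistical mechanics treatment, the closed set of differential equations for the order parameters that the paper analyzes.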
Original language | English |
---|---|
Pages (from-to) | 5902-5911 |
Number of pages | 10 |
Journal | Physical Review E |
Volume | 60 |
Issue number | 5 |
DOIs | |
Publication status | Published - Nov 1999 |
Bibliographical note
Copyright of the American Physical Society
Keywords
- on-line learning
- neural networks
- neurons
- asymptotic regime
- residual generalization error
- asymptotic training parameters
- generalization error decay