We consider the problem of on-line gradient descent learning for general two-layer neural networks. An analytic solution is presented and used to investigate the role of the learning rate in controlling the evolution and convergence of the learning process.
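The setting described in the abstract can be illustrated with a minimal sketch of on-line (one-example-per-step) gradient descent in a teacher-student scenario for a two-layer network. The network sizes, tanh activation, squared-error loss, and the learning rate value `eta` here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 3  # input dimension, hidden units (assumed sizes)

# Student network parameters
W = rng.normal(size=(K, N)) / np.sqrt(N)   # first-layer weights
v = rng.normal(size=K)                     # second-layer weights

# Teacher network defines the target rule
W_t = rng.normal(size=(K, N)) / np.sqrt(N)
v_t = rng.normal(size=K)

def forward(W, v, x):
    h = np.tanh(W @ x)       # hidden-unit activations
    return v @ h, h

def gen_error(W, v, n=2000):
    # Monte Carlo estimate of the generalization (test) error
    X = rng.normal(size=(n, N))
    Y_t = np.tanh(X @ W_t.T) @ v_t
    Y = np.tanh(X @ W.T) @ v
    return np.mean((Y - Y_t) ** 2)

eta = 0.1  # learning rate: controls the speed and stability of convergence
e0 = gen_error(W, v)
for step in range(5000):
    x = rng.normal(size=N)           # a fresh example each step (on-line)
    y_t, _ = forward(W_t, v_t, x)    # teacher output
    y, h = forward(W, v, x)
    err = y - y_t
    # gradients of the per-example squared error 0.5 * err**2
    grad_v = err * h
    grad_W = err * np.outer(v * (1 - h ** 2), x)
    v -= eta * grad_v
    W -= eta * grad_W
e1 = gen_error(W, v)
print(f"generalization error: {e0:.4f} -> {e1:.4f}")
```

Repeating the run with different values of `eta` shows the trade-off the abstract refers to: a small learning rate converges slowly, while too large a rate destabilizes the dynamics.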
Number of pages: 7
Journal: Advances in Neural Information Processing Systems
Publication status: Published - 1996
Bibliographical note: Copyright of Massachusetts Institute of Technology Press (MIT Press). http://mitpress.mit.edu/mitpress/copyright/default.asp
- gradient descent learning
- general two-layer neural networks
- learning rate
- learning process