We present an analytic solution to the problem of on-line gradient-descent learning for two-layer neural networks with an arbitrary number of hidden units in both teacher and student networks. The technique, demonstrated here for the case of adaptive input-to-hidden weights, becomes exact as the dimensionality of the input space increases.
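The setting described, on-line gradient descent in a teacher-student scenario with two-layer (soft committee machine) networks, can be illustrated with a small simulation. The sketch below is not the paper's analytic solution; it is a hedged numerical illustration under common assumptions for this model class (erf hidden-unit activations, unit hidden-to-output weights, Gaussian inputs, one fresh example per update, step size scaled by 1/N). All sizes and the learning rate are illustrative choices, not values from the paper.

```python
import numpy as np
from math import erf, sqrt

# Hidden-unit activation g(h) = erf(h / sqrt(2)) and its derivative
g = np.vectorize(lambda h: erf(h / sqrt(2)))
g_prime = lambda h: np.sqrt(2.0 / np.pi) * np.exp(-h * h / 2.0)

N, M, K = 50, 2, 3   # input dimension, teacher / student hidden units (illustrative)
eta = 0.5            # learning rate (illustrative)
rng = np.random.default_rng(0)

B = rng.standard_normal((M, N))        # fixed teacher input-to-hidden weights
J = 0.1 * rng.standard_normal((K, N))  # adaptive student weights, small random init

def output(W, x):
    # Soft committee machine: sum of hidden activations, unit output weights
    return g(W @ x / sqrt(N)).sum()

def gen_error(J, n_test=1000):
    # Monte Carlo estimate of the generalization error 0.5 * E[(sigma - tau)^2]
    errs = [0.5 * (output(J, x) - output(B, x)) ** 2
            for x in rng.standard_normal((n_test, N))]
    return float(np.mean(errs))

e0 = gen_error(J)
for _ in range(10000):                 # on-line learning: one fresh example per step
    x = rng.standard_normal(N)
    h = J @ x / sqrt(N)                # student hidden-unit fields
    delta = output(B, x) - output(J, x)
    # Gradient-descent step on the squared error, scaled by eta / N
    J += (eta / N) * delta * g_prime(h)[:, None] * x[None, :]
e1 = gen_error(J)
print(e0, e1)                          # generalization error before and after training
```

With inputs of growing dimensionality N, the fluctuations of such a simulation around the mean-field dynamics shrink, which is the sense in which an analytic description can become exact in the large-N limit.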
|Title of host publication||Proceedings of the first international conference on mathematics of neural networks: models, algorithms and applications|
|Place of Publication||Oxford|
|Publication status||Published - 1997|
|Bibliographical note||© Springer Science+Business Media New York 1997|