For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.
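The covariance function for sigmoidal (erf) hidden units derived in the paper has a closed form, which makes exact Gaussian-process prediction with an effectively infinite network straightforward. A minimal sketch of that idea, using the arc-sine ("erf") covariance with an assumed isotropic weight-prior variance `sigma` and observation-noise level `noise` (both illustrative values, not from the paper):

```python
import numpy as np

def erf_kernel(X1, X2, sigma=1.0):
    """Covariance of an infinite network of erf hidden units:
    K(x, x') = (2/pi) * arcsin( 2 u.S.u' / sqrt((1 + 2 u.S.u)(1 + 2 u'.S.u')) ),
    where u = (1, x) augments the input with a bias and S = sigma^2 * I
    is the (here isotropic) weight-prior covariance."""
    U1 = np.hstack([np.ones((X1.shape[0], 1)), X1])
    U2 = np.hstack([np.ones((X2.shape[0], 1)), X2])
    S = sigma ** 2
    cross = 2.0 * S * (U1 @ U2.T)
    d1 = 1.0 + 2.0 * S * np.sum(U1 ** 2, axis=1)
    d2 = 1.0 + 2.0 * S * np.sum(U2 ** 2, axis=1)
    return (2.0 / np.pi) * np.arcsin(cross / np.sqrt(np.outer(d1, d2)))

def gp_predict(Xtrain, ytrain, Xtest, noise=1e-3, sigma=1.0):
    """Standard GP regression: posterior mean and variance at Xtest."""
    K = erf_kernel(Xtrain, Xtrain, sigma) + noise * np.eye(len(Xtrain))
    Ks = erf_kernel(Xtest, Xtrain, sigma)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytrain))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(erf_kernel(Xtest, Xtest, sigma)) - np.sum(v ** 2, axis=0)
    return mean, var

# Illustrative use: fit a smooth 1-D function with the infinite-network kernel.
Xtr = np.linspace(-2.0, 2.0, 10).reshape(-1, 1)
ytr = np.tanh(Xtr).ravel()
mean, var = gp_predict(Xtr, ytr, Xtr)
```

Prediction costs the usual O(n^3) in the number of training points, independent of the (infinite) number of hidden units, which is the sense in which computing with the infinite network is easier than with a finite one.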
|Title of host publication||Advances in Neural Information Processing Systems|
|Editors||M. C. Mozer, M. I. Jordan, T. Petsche|
|Place of Publication||Cambridge, US|
|Number of pages||37|
|Publication status||Published - 1996|
|Event||10th Annual Conference on Neural Information Processing Systems, NIPS 1996 - Denver, CO, United States|
Duration: 2 Dec 1996 → 5 Dec 1996
|Name||Proceedings of the 1996 conference|
|Publisher||Massachusetts Institute of Technology Press (MIT Press)|
|Conference||10th Annual Conference on Neural Information Processing Systems, NIPS 1996|
|Period||2/12/96 → 5/12/96|
|Bibliographical note||Copyright of the Massachusetts Institute of Technology Press (MIT Press)|
- neural networks
- Gaussian process
Williams, C. K. I. (1996). Computing with infinite networks. In M. C. Mozer, M. I. Jordan, & T. Petsche (Eds.), Advances in Neural Information Processing Systems (pp. 265-301). (Proceedings of the 1996 conference). MIT Press.