Computing with infinite networks

Christopher K. I. Williams

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.
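
    To make the abstract's construction concrete: for erf (sigmoidal) hidden units whose input-to-hidden weights (bias included) have a zero-mean Gaussian prior N(0, Sigma), the covariance function derived in the paper takes the arcsin form C(x, x') = (2/pi) arcsin( 2 u^T Sigma v / sqrt((1 + 2 u^T Sigma u)(1 + 2 v^T Sigma v)) ), where u = (1, x) and v = (1, x') are bias-augmented inputs. Below is a minimal NumPy sketch of Gaussian-process prediction with this kernel; the prior covariance Sigma, the toy data set, and the noise variance are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def erf_kernel(X1, X2, Sigma):
        """Covariance of an infinite network of erf hidden units whose
        input-to-hidden weights (bias included) are drawn from N(0, Sigma)
        (arcsin form of Williams, 1996). Rows of X1, X2 are input points."""
        A1 = np.hstack([np.ones((X1.shape[0], 1)), X1])  # augment: u = (1, x)
        A2 = np.hstack([np.ones((X2.shape[0], 1)), X2])
        cross = 2.0 * A1 @ Sigma @ A2.T                  # 2 u^T Sigma v
        d1 = 1.0 + 2.0 * np.einsum('ij,jk,ik->i', A1, Sigma, A1)
        d2 = 1.0 + 2.0 * np.einsum('ij,jk,ik->i', A2, Sigma, A2)
        return (2.0 / np.pi) * np.arcsin(cross / np.sqrt(np.outer(d1, d2)))

    # GP regression with the infinite-network kernel on toy 1-d data.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, (20, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
    Xstar = np.linspace(-3.0, 3.0, 100)[:, None]

    Sigma = np.diag([1.0, 2.0])  # illustrative prior covariance for (bias, weight)
    noise = 0.1 ** 2             # illustrative observation-noise variance

    K = erf_kernel(X, X, Sigma) + noise * np.eye(len(X))
    Kstar = erf_kernel(Xstar, X, Sigma)
    mean = Kstar @ np.linalg.solve(K, y)  # posterior mean at the test inputs
    ```

    Because the covariance is available in closed form, prediction reduces to a single O(n^3) solve in the number of training points, independent of the (infinite) number of hidden units; this is the sense in which computing with the infinite network can be easier than training a finite one.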
    Original language: English
    Title of host publication: Advances in Neural Information Processing Systems
    Editors: M. C. Mozer, M. I. Jordan, T. Petsche
    Place of Publication: Cambridge, US
    Publisher: MIT Press
    Pages: 265-301
    Number of pages: 37
    ISBN (Print): 0262100657
    Publication status: Published - 1996
    Event: 10th Annual Conference on Neural Information Processing Systems, NIPS 1996 - Denver, CO, United States
    Duration: 2 Dec 1996 - 5 Dec 1996

    Publication series

    Name: Proceedings of the 1996 conference
    Publisher: Massachusetts Institute of Technology Press (MIT Press)

    Conference

    Conference: 10th Annual Conference on Neural Information Processing Systems, NIPS 1996
    Country: United States
    City: Denver, CO
    Period: 2/12/96 - 5/12/96

    Bibliographical note

    Copyright of the Massachusetts Institute of Technology Press (MIT Press)

    Keywords

    • neural networks
    • weight-priors
    • Gaussian process
    • sigmoidal

    Cite this

    Williams, C. K. I. (1996). Computing with infinite networks. In M. C. Mozer, M. I. Jordan, & T. Petsche (Eds.), Advances in Neural Information Processing Systems (pp. 265-301). (Proceedings of the 1996 conference). Cambridge, US: MIT Press.
    Williams, Christopher K. I. / Computing with infinite networks. Advances in Neural Information Processing Systems. editor / M. C. Mozer ; M. I. Jordan ; T. Petsche. Cambridge, US : MIT Press, 1996. pp. 265-301 (Proceedings of the 1996 conference).
    @inproceedings{1566f37c6c934294aedd6126be3bb4a5,
    title = "Computing with infinite networks",
    abstract = "For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.",
    keywords = "neural networks, weight-priors, Gaussian process, sigmoidal",
    author = "Williams, {Christopher K. I.}",
    note = "Copyright of the Massachusetts Institute of Technology Press (MIT Press)",
    year = "1996",
    language = "English",
    isbn = "0262100657",
    series = "Proceesing of the 1996 conference",
    publisher = "MIT",
    pages = "265--301",
    editor = "Mozer, {M. C.} and Jordan, {M. I.} and T. Petsche",
    booktitle = "Advances in Neural Information Processing Systems",

    }

    Williams, CKI 1996, Computing with infinite networks. in MC Mozer, MI Jordan & T Petsche (eds), Advances in Neural Information Processing Systems. Proceedings of the 1996 conference, MIT Press, Cambridge, US, pp. 265-301, 10th Annual Conference on Neural Information Processing Systems, NIPS 1996, Denver, CO, United States, 2/12/96.

    Computing with infinite networks. / Williams, Christopher K. I.

    Advances in Neural Information Processing Systems. ed. / M. C. Mozer; M. I. Jordan; T. Petsche. Cambridge, US : MIT Press, 1996. p. 265-301 (Proceedings of the 1996 conference).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    TY - GEN

    T1 - Computing with infinite networks

    AU - Williams, Christopher K. I.

    N1 - Copyright of the Massachusetts Institute of Technology Press (MIT Press)

    PY - 1996

    Y1 - 1996

    N2 - For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.

    AB - For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.

    KW - neural networks

    KW - weight-priors

    KW - Gaussian process

    KW - sigmoidal

    UR - http://www.scopus.com/inward/record.url?scp=84898974226&partnerID=8YFLogxK

    UR - http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3990

    M3 - Conference contribution

    SN - 0262100657

    T3 - Proceedings of the 1996 conference

    SP - 265

    EP - 301

    BT - Advances in Neural Information Processing Systems

    A2 - Mozer, M. C.

    A2 - Jordan, M. I.

    A2 - Petsche, T.

    PB - MIT Press

    CY - Cambridge, US

    ER -

    Williams CKI. Computing with infinite networks. In Mozer MC, Jordan MI, Petsche T, editors, Advances in Neural Information Processing Systems. Cambridge, US: MIT Press. 1996. p. 265-301. (Proceedings of the 1996 conference).