A Bayesian approach to on-line learning

Manfred Opper, Ole Winther

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Online learning is discussed from the viewpoint of Bayesian statistical inference. By replacing the true posterior distribution with a simpler parametric distribution, one can define an online algorithm by a repetition of two steps: an update of the approximate posterior when a new example arrives, and an optimal projection into the parametric family. Choosing this family to be Gaussian, we show that the algorithm achieves asymptotic efficiency. An application to learning in single-layer neural networks is given.
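
The two-step scheme described in the abstract (a Bayes update with the new example, followed by projection back onto the Gaussian family) is what is now commonly called assumed-density filtering. As an illustration only, and not the chapter's own derivation, the sketch below applies it to a probit single-layer "perceptron" likelihood P(y | x, w) = Phi(y * w.x), for which the Gaussian moment-matching update has a closed form; the probit noise model, prior, and toy data stream are assumptions made for this example.

```python
# Minimal sketch of Bayesian online (assumed-density filtering) learning with a
# Gaussian approximate posterior N(m, C) over the weights of a single-layer
# "perceptron" with probit likelihood P(y | x, w) = Phi(y * w.x).
# The probit model, prior, and toy data below are illustrative assumptions.
import numpy as np
from scipy.stats import norm


def adf_probit_update(m, C, x, y):
    """Absorb one example (x, y in {-1, +1}) and project back to a Gaussian
    by matching the first two moments of the field h = w.x."""
    Cx = C @ x
    mu = m @ x                     # current predictive mean of h
    s2 = x @ Cx                    # current predictive variance of h
    z = y * mu / np.sqrt(1.0 + s2)
    r = norm.pdf(z) / norm.cdf(z)  # phi(z)/Phi(z); a log-domain version is safer in practice
    alpha = y * r / np.sqrt(1.0 + s2)   # gradient of the log-evidence w.r.t. m
    lam = r * (z + r) / (1.0 + s2)      # curvature term; keeps C positive definite
    return m + alpha * Cx, C - lam * np.outer(Cx, Cx)


# Toy run: learn a random teacher perceptron from a stream of examples.
rng = np.random.default_rng(0)
d = 20
w_teacher = rng.standard_normal(d)
m, C = np.zeros(d), np.eye(d)          # Gaussian prior N(0, I)
for _ in range(500):
    x = rng.standard_normal(d) / np.sqrt(d)
    y = np.sign(w_teacher @ x)
    m, C = adf_probit_update(m, C, x, y)

overlap = m @ w_teacher / (np.linalg.norm(m) * np.linalg.norm(w_teacher))
print(f"teacher-student overlap after 500 examples: {overlap:.3f}")
```

Because the likelihood depends on the weights only through the scalar field h = w.x, the projection step reduces to matching the mean and variance of h, which is why each update touches only the rank-one direction C x.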
Original language: English
Title of host publication: On-line learning in neural networks
Editors: David Saad
Place of Publication: Cambridge
Publisher: Cambridge University Press
Pages: 363-378
Number of pages: 16
ISBN (Print): 0521652634
DOIs: https://doi.org/10.2277/0521652634
Publication status: Published - Jan 1999

Publication series

Name: Publications of the Newton Institute
Publisher: Cambridge University Press
Volume: 17

Bibliographical note

Copyright of Cambridge University Press. Available on Google Books.

Keywords

  • Online learning
  • Bayesian statistical inference
  • asymptotic efficiency
  • neural networks

Cite this

Opper, M., & Winther, O. (1999). A Bayesian approach to on-line learning. In D. Saad (Ed.), On-line learning in neural networks (pp. 363-378). (Publications of the Newton Institute; Vol. 17). Cambridge: Cambridge University Press. https://doi.org/10.2277/0521652634