Theoretical foundations of neural networks

Christopher M. Bishop

Research output: Chapter in Book/Published conference output › Chapter

Abstract

Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
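The failure mode described in the abstract can be illustrated with a toy multi-valued problem. In the sketch below (a minimal assumption of mine, not an example from the chapter itself), every input has two equally likely target branches at +1 and -1; a least-squares fit converges on the conditional mean, which lies between the branches and is never itself a valid solution.

```python
import numpy as np

# Toy multi-valued "inverse problem": for every input x, the target t is
# equally likely to be +1 or -1 (two branches of the inverse mapping).
# The data and the linear model are illustrative assumptions only.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)
t = rng.choice([-1.0, 1.0], size=1000)

# A naive least-squares fit (here a straight line) minimises the sum of
# squared errors, so it approximates the conditional mean of t given x ...
coeffs = np.polyfit(x, t, deg=1)
prediction = np.polyval(coeffs, 0.5)

# ... which is approximately 0: between the two branches, and itself
# never a valid target value.
print(abs(prediction))
```

A mixture-based model of the full conditional density, of the kind the abstract's "more general formalism" refers to, would instead recover both branches rather than averaging them.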
Original language: English
Title of host publication: Proceedings of Physics Computing 96
Editors: P. Borcherds, M. Bubak, A. Maksymowicz
Place of publication: Krakow
Publisher: Academic Computer Centre
Pages: 500-507
Number of pages: 8
Publication status: Published - 1996
Event: Physics Computing '96
Duration: 1 Jan 1996 to 1 Jan 1996

Conference

Conference: Physics Computing '96
Period: 1/01/96 to 1/01/96

Keywords

  • neural networks
  • nervous systems
  • statistical pattern recognition
  • computational learning theory
  • statistics
  • information geometry
  • statistical mechanics
