Statistical mechanics of mutual information maximization

R. Urbanczik*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed (Becker S. and Hinton G., Nature, 355 (1992) 161). By exploiting a formal analogy to supervised learning in parity machines, the theory of zero-temperature Gibbs learning for the unsupervised procedure is presented for the case where the networks are perceptrons and for the case of fully connected committee machines.
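As a concrete illustration of the kind of procedure the abstract refers to, the following minimal Python sketch trains two linear perceptrons on statistically dependent inputs by gradient ascent. It is not the paper's method: the input distribution, network sizes, and the Gaussian variance-ratio proxy I ≈ 0.5 log[Var(a+b)/Var(a-b)] for the mutual information are all illustrative assumptions in the spirit of Becker and Hinton's proposal, whereas the paper analyzes zero-temperature Gibbs learning of such an objective.

    # Minimal sketch of mutual information maximization between two
    # perceptrons (illustrative only; not the paper's Gibbs-learning
    # analysis). Objective: a Gaussian proxy for the mutual information
    # between the two outputs, I ~ 0.5 * log(Var(a+b) / Var(a-b)).
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 20, 1000  # input dimension per network, number of examples

    # Statistically dependent inputs: a shared latent signal plus
    # independent noise on each channel (an illustrative assumption).
    latent = rng.standard_normal((P, N))
    x1 = latent + 0.7 * rng.standard_normal((P, N))
    x2 = latent + 0.7 * rng.standard_normal((P, N))

    w1 = rng.standard_normal(N) / np.sqrt(N)  # weights of network 1
    w2 = rng.standard_normal(N) / np.sqrt(N)  # weights of network 2

    def info_proxy(w1, w2):
        # Gaussian proxy for I(a;b) between the two linear outputs.
        a, b = x1 @ w1, x2 @ w2
        return 0.5 * np.log(np.var(a + b) / np.var(a - b))

    # Gradient ascent on the proxy. The gradient of 0.5*log Var(s)
    # w.r.t. w1 is Cov(s, x1)/Var(s), estimated here from the sample.
    x1c = x1 - x1.mean(axis=0)
    x2c = x2 - x2.mean(axis=0)
    lr = 0.1
    for _ in range(300):
        a, b = x1 @ w1, x2 @ w2
        s, d = a + b, a - b
        sc, dc = s - s.mean(), d - d.mean()
        g1 = (sc @ x1c) / (sc @ sc) - (dc @ x1c) / (dc @ dc)
        g2 = (sc @ x2c) / (sc @ sc) + (dc @ x2c) / (dc @ dc)
        w1 += lr * g1
        w2 += lr * g2

    print(f"proxy mutual information after training: {info_proxy(w1, w2):.3f}")

With these shared-latent inputs, ascent drives the two weight vectors to align, so the difference of the outputs retains only the independent noise and the variance ratio, and hence the proxy information, approaches its maximum.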

Original language: English
Pages (from-to): 685-691
Number of pages: 7
Journal: Europhysics Letters
Volume: 49
Issue number: 5
Publication status: Published - March 2000

Bibliographical note

Copyright of EDP Sciences

Keywords

  • unsupervised learning procedure
  • networks
  • supervised learning
