Abstract
An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed (Becker S. and Hinton G., Nature, 355 (1992) 161). By exploiting a formal analogy to supervised learning in parity machines, the theory of zero-temperature Gibbs learning for the unsupervised procedure is presented for the case in which the networks are perceptrons and for the case of fully connected committee machines.
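The sketch below is only an illustration of the underlying Becker-Hinton objective referred to in the abstract, not of the paper's statistical-mechanics analysis: two linear perceptrons receive different but statistically dependent views of a common signal, and their weights are adapted by gradient ascent on the Gaussian approximation of the mutual information between their outputs, I ≈ 0.5 log(Var(a+b)/Var(a−b)). The toy data generator, network sizes, and learning-rate values are assumptions made for this example.

```python
# Minimal sketch (assumed setup, not the paper's experiment): maximize the
# Gaussian estimate of the mutual information between the outputs of two
# linear perceptrons that see noisy views of the same underlying signal.
import numpy as np

rng = np.random.default_rng(0)

N = 20        # input dimension per network (assumed)
P = 500       # number of training examples (assumed)
eta = 0.05    # learning rate (assumed)
steps = 200   # gradient-ascent steps (assumed)

# Common signal plus independent noise -> two dependent input streams
signal = rng.normal(size=(P, N))
x_a = signal + 0.5 * rng.normal(size=(P, N))
x_b = signal + 0.5 * rng.normal(size=(P, N))

w_a = rng.normal(size=N) / np.sqrt(N)
w_b = rng.normal(size=N) / np.sqrt(N)

def info(a, b):
    """Gaussian estimate of the mutual information between outputs a and b."""
    return 0.5 * np.log(np.var(a + b) / np.var(a - b))

for t in range(steps):
    a = x_a @ w_a                  # output of network A
    b = x_b @ w_b                  # output of network B
    s, d = a + b, a - b
    vs, vd = np.var(s), np.var(d)
    # Gradient of 0.5*log(Vs/Vd) w.r.t. the outputs, propagated to the
    # weights of the two linear perceptrons.
    grad_a = (s - s.mean()) / (P * vs) - (d - d.mean()) / (P * vd)
    grad_b = (s - s.mean()) / (P * vs) + (d - d.mean()) / (P * vd)
    w_a += eta * x_a.T @ grad_a
    w_b += eta * x_b.T @ grad_b

print(f"mutual information estimate after training: {info(x_a @ w_a, x_b @ w_b):.3f}")
```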
| Original language | English |
| --- | --- |
| Pages (from-to) | 685-691 |
| Number of pages | 7 |
| Journal | Europhysics Letters |
| Volume | 49 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - Mar 2000 |
Bibliographical note
Copyright of EDP Sciences
Keywords
- unsupervised learning procedure
- networks
- supervised learning