Abstract
An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner-relaxing approach for the neural gas network. Originally, winner-relaxing learning is a slight modification of the self-organizing map learning rule that allows the magnification behavior to be adjusted by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without impacting the magnification, is studied numerically. This approach to maps of maximal mutual information is interesting for applications, as the winner-relaxing term only adds computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as in other magnification control approaches.
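The abstract describes the learning rule only at a high level. As a rough illustration, the sketch below shows what one winner-relaxing neural gas update step could look like, assuming the relaxing term follows the winner-relaxing SOM construction: every unit receives the standard rank-based neural gas update, and only the winner gets an additional term built from the neighborhood-weighted updates of the other units, scaled by a control parameter. The names `wrng_step`, `eps`, `lam`, and `mu`, as well as the sign convention of the extra term, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def wrng_step(W, v, eps=0.05, lam=1.0, mu=0.5):
    """One assumed winner-relaxing neural gas step.

    W : (n_units, dim) prototype array; v : (dim,) input sample.
    """
    dists = np.linalg.norm(W - v, axis=1)
    ranks = np.argsort(np.argsort(dists))    # rank k_i of each unit (0 = winner)
    h = np.exp(-ranks / lam)                 # neural gas neighborhood h_lambda(k_i)
    delta = eps * h[:, None] * (v - W)       # standard neural gas update for every unit
    winner = np.argmin(dists)
    others = np.arange(len(W)) != winner
    # Assumed winner-relaxing term: the winner is additionally shifted against the
    # summed neighborhood-weighted updates of all other units, scaled by mu, the
    # a priori chosen parameter that steers the magnification exponent.
    delta[winner] -= eps * mu * np.sum(h[others][:, None] * (v - W[others]), axis=0)
    return W + delta

# Illustrative usage: adapt 20 prototypes to a nonuniform 2-D density.
rng = np.random.default_rng(0)
W = rng.random((20, 2))
for _ in range(10_000):
    v = rng.random(2) ** 2                   # density concentrated near the origin
    W = wrng_step(W, v, eps=0.02, lam=2.0, mu=0.5)
```

Note that, in line with the abstract, this adds only a per-step cost of the same order as plain neural gas (one extra weighted sum) and needs no estimate of the data density.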
Field | Value
---|---
Original language | English
Pages (from-to) | 125-137
Number of pages | 13
Journal | Neurocomputing
Volume | 63
Issue number | SPEC. ISS.
DOIs |
Publication status | Published - 1 Jan 2005
Bibliographical note
Copyright © 2004 Elsevier B.V. All rights reserved.

Keywords
- Magnification control
- Neural gas
- Self-organizing maps
- Vector quantization