Magnification control in winner relaxing neural gas

Jens Christian Claussen, Thomas Villmann*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner relaxing approach for the neural gas network. Originally, winner relaxing learning is a slight modification of the self-organizing map learning rule that allows the magnification behavior to be adjusted by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without affecting the magnification, is also studied numerically. This approach to maps of maximal mutual information is attractive for applications because the winner relaxing term adds computational cost of only the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as in other magnification control approaches.
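To make the idea concrete, the following is a minimal sketch of one winner relaxing neural gas update step. The rank-based neighborhood kernel is the standard neural gas ingredient; the specific form of the winner-relaxing term shown here (the winner receiving an extra correction proportional to the summed updates of the other neurons, weighted by a control parameter `mu`) is an illustrative assumption, not the paper's exact rule, and all names (`wrng_step`, `eps`, `lam`, `mu`) are hypothetical.

```python
import numpy as np

def wrng_step(w, v, eps=0.05, lam=1.0, mu=0.0):
    """One illustrative winner relaxing neural gas update.

    w   : (N, d) array of codebook vectors
    v   : (d,) data sample
    eps : learning rate
    lam : range of the rank-based neighborhood kernel
    mu  : winner-relaxing control parameter (mu = 0 gives plain neural gas)
    """
    d2 = np.sum((w - v) ** 2, axis=1)
    ranks = np.argsort(np.argsort(d2))          # rank 0 = winner
    h = np.exp(-ranks / lam)                    # neighborhood function
    delta = eps * h[:, None] * (v - w)          # standard neural gas term
    s = np.argmin(d2)                           # index of the winner
    # Hypothetical winner-relaxing term: the winner is additionally shifted
    # in proportion to the summed displacements of all *other* neurons.
    relax = np.sum(h[:, None] * (w - v), axis=0) - h[s] * (w[s] - v)
    delta[s] += eps * mu * relax
    return w + delta
```

With `mu = 0` every neuron simply moves toward the sample by a rank-weighted fraction, recovering ordinary neural gas; nonzero `mu` is the a priori control parameter that, in the paper's analysis, modifies the magnification exponent at only the same order of computational cost.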

Original language: English
Pages (from-to): 125-137
Number of pages: 13
Journal: Neurocomputing
Volume: 63
Issue number: SPEC. ISS.
Publication status: Published - 1 Jan 2005

Bibliographical note

Copyright © 2004 Elsevier B.V. All rights reserved.

Keywords

  • Magnification control
  • Neural gas
  • Self-organizing maps
  • Vector quantization

