Abstract
We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. The concave-convex learning approach for SOM is extended to a more general description, whereas concave-convex learning for NG is new. In general, the control mechanisms produce only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.
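To make the abstract's notion of localized learning concrete, the following is a minimal sketch of one SOM update step with a winner-local learning factor. The standard SOM rule is w_r ← w_r + ε h(r, s)(x − w_r), with s the winner; localized magnification control scales ε by a factor that depends on the local input density at the winner. The density proxy used here (the winner's quantization error raised to a control exponent `m`) is an illustrative assumption, not the exact estimator from the paper; `m = 0` recovers the standard SOM.

```python
import numpy as np

def som_step(weights, x, eps, sigma, m=0.0):
    """One update of a one-dimensional SOM with a localized learning factor.

    weights : 1-D array of prototype positions
    x       : scalar input sample
    eps     : base learning rate
    sigma   : neighborhood width on the lattice
    m       : magnification-control exponent (m = 0 gives the standard SOM)

    The winner-local factor |x - w_s|**m is a crude stand-in for a local
    density estimate; the actual estimator in the literature differs.
    """
    s = np.argmin(np.abs(weights - x))                       # winner index
    lattice = np.arange(len(weights))
    h = np.exp(-((lattice - s) ** 2) / (2.0 * sigma ** 2))   # neighborhood
    local = np.abs(x - weights[s]) ** m if m else 1.0        # local factor
    return weights + eps * local * h * (x - weights)
```

The one-dimensional restriction here mirrors the abstract's caveat: the SOM magnification results are only established for the one-dimensional case.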
| Original language | English |
|---|---|
| Pages (from-to) | 446-469 |
| Number of pages | 24 |
| Journal | Neural Computation |
| Volume | 18 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Feb 2006 |
Bibliographical note
© 2005 Massachusetts Institute of Technology