Magnification control in self-organizing maps and neural gas

Thomas Villmann, Jens Christian Claussen

Research output: Contribution to journal › Article

Abstract

We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. In doing so, the approach of concave-convex learning in SOM is extended to a more general description, whereas concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas the SOM results hold only in the one-dimensional case.
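
For orientation, the magnification of a vector quantizer relates the prototype density of the converged map to the data density: the prototypes distribute according to a power law ρ(w) ∝ P(w)^α. Classic results give α = 2/3 for the one-dimensional SOM and α = d/(d+2) for NG on data of intrinsic dimension d; magnification control modifies the learning rule so that α becomes adjustable, for instance toward α = 1, where the map transmits maximal information. As a rough illustration of the localized-learning mechanism discussed in the paper, the following Python sketch scales the NG learning rate by a power of the data density at the stimulus. The function name, the annealing schedules, and the density(v)**m factor are illustrative assumptions made for this summary, not the authors' code; the exact exponent relations are derived in the article.

import numpy as np

rng = np.random.default_rng(0)

def neural_gas_localized(data, density, n_units=20, n_steps=20_000,
                         eps0=0.5, eps_final=0.01,
                         lam0=10.0, lam_final=0.5, m=0.0):
    """Neural gas with a localized learning rate eps * density(v)**m.

    m = 0 recovers plain neural gas; m != 0 shifts the magnification
    exponent (illustrative sketch; see the article for the exact relation).
    """
    w = rng.choice(data, size=n_units, replace=False).astype(float)
    for t in range(n_steps):
        frac = t / n_steps
        eps = eps0 * (eps_final / eps0) ** frac   # annealed learning rate
        lam = lam0 * (lam_final / lam0) ** frac   # annealed neighborhood range
        v = data[rng.integers(len(data))]         # random stimulus
        ranks = np.argsort(np.argsort(np.abs(w - v)))  # NG rank of each unit
        h = np.exp(-ranks / lam)                       # rank-based neighborhood
        w += eps * density(v) ** m * h * (v - w)       # localized update
    return np.sort(w)

# Toy 1-D example with linear density P(v) = 2v on [0, 1],
# sampled by the inverse-CDF method.
data = np.sqrt(rng.random(50_000))

def density(v):
    return 2.0 * v

w_plain = neural_gas_localized(data, density, m=0.0)
w_ctrl = neural_gas_localized(data, density, m=0.5)

# Prototype spacing is inversely proportional to the local prototype
# density; with m > 0 it narrows more strongly where P is large.
print(np.diff(w_plain))
print(np.diff(w_ctrl))

With m = 0 the sketch reduces to plain NG; a positive m concentrates prototypes more strongly in high-density regions, which is the qualitative effect whose exponent the paper computes for localized, concave-convex, and winner-relaxing learning.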

Original language: English
Pages (from-to): 446-469
Number of pages: 24
Journal: Neural Computation
Volume: 18
Issue number: 2
DOI: 10.1162/089976606775093918
Publication status: Published - 1 Feb 2006

Bibliographical note

© 2005 Massachusetts Institute of Technology

Cite this

@article{cc8667cce10846699946d6ddddec994c,
  title = "Magnification control in self-organizing maps and neural gas",
  author = "Thomas Villmann and Claussen, {Jens Christian}",
  note = "{\textcopyright} 2005 Massachusetts Institute of Technology",
  year = "2006",
  month = feb,
  day = "1",
  doi = "10.1162/089976606775093918",
  language = "English",
  volume = "18",
  pages = "446--469",
  journal = "Neural Computation",
  issn = "0899-7667",
  publisher = "MIT Press Journals",
  number = "2",
}
