Mirror-image relations in category learning

Martin Jüttner*, Ingo Rentschler

*Corresponding author for this work

Research output: Contribution to journal › Article (peer-reviewed)


The discrimination of patterns that are mirror-symmetric counterparts of each other is difficult and requires substantial training. We explored whether mirror-image discrimination during expertise acquisition is based on associative learning strategies or involves a representational shift towards configural pattern descriptions that permit resolution of symmetry relations. Subjects were trained to discriminate between sets of unfamiliar grey-level patterns under two conditions, one of which required the separation of mirror images and one of which did not. Both groups were subsequently tested in a 4-class category-learning task employing the same set of stimuli. The results show that subjects who had successfully learned to discriminate between mirror-symmetric counterparts were distinctly faster in the categorization task, indicating a transfer of conceptual knowledge between the two tasks. Additional computer simulations suggest that the development of such symmetry concepts involves the construction of configural, protoholistic descriptions, in which positions of pattern parts are encoded relative to a spatial frame of reference.
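The representational contrast at issue can be illustrated with a minimal sketch (not the authors' simulation; part coordinates and function names are hypothetical): a purely relational code built from unordered inter-part distances is invariant under reflection and so cannot separate mirror images, whereas a configural code that keeps signed part positions relative to a spatial frame of reference (here, the pattern centroid) does distinguish them.

```python
# Illustrative sketch, assuming patterns are given as 2-D part positions.
# Not the published model: it only demonstrates why a frame-relative
# (configural) code can resolve mirror symmetry while a distance-based
# (relational) code cannot.
from itertools import combinations
import math

pattern = [(1.0, 0.0), (0.0, 2.0), (-1.0, 1.0)]   # hypothetical part positions
mirror = [(-x, y) for (x, y) in pattern]          # reflection about the vertical axis

def pairwise_distances(parts):
    """Relational code: the multiset of inter-part distances.
    Reflection-invariant, so mirror images receive identical codes."""
    return sorted(math.dist(a, b) for a, b in combinations(parts, 2))

def frame_relative(parts):
    """Configural code: signed part positions relative to the centroid frame.
    Reflection flips the sign of one axis, so mirror images differ."""
    cx = sum(x for x, _ in parts) / len(parts)
    cy = sum(y for _, y in parts) / len(parts)
    return sorted((round(x - cx, 6), round(y - cy, 6)) for x, y in parts)

print(pairwise_distances(pattern) == pairwise_distances(mirror))  # True
print(frame_relative(pattern) == frame_relative(mirror))          # False
```

Under this toy encoding, training that forces mirror-image separation would correspond to shifting from the first code to the second.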

Original language: English
Pages (from-to): 211-237
Number of pages: 27
Journal: Visual Cognition
Volume: 15
Issue number: 2
Publication status: Published - Feb 2007

Bibliographical note

This is an electronic version of an article published in Jüttner, Martin and Rentschler, Ingo (2007). Mirror-image relations in category learning. Visual Cognition, 15 (2), pp. 211-237. Visual Cognition is available online at: http://www.informaworld.com/openurl?genre=article&issn=1464-0716&volume=15&issue=2&spage=211


Keywords

  • categorization
  • learning
  • pattern recognition
  • mirror image
  • holistic
  • configural
  • expertise
  • symmetry
