The visual system combines samples from the retinal image into representations of spatially extensive textures. Local orientation signals can be pooled over a texture to estimate global orientation, with psychophysical performance improving as a function of signal area. We used a novel stimulus to investigate how orientation signals are combined over space (specifically, whether observers could ignore signals from irrelevant locations) and how spatial configuration affects this pooling. Stimuli were 24×24 element arrays of 4 c/deg log-Gabors, spaced 1 degree apart. A proportion of these elements had a coherent orientation (horizontal/vertical), with the remainder assigned random orientations. The observer’s task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard-like pattern of square checks containing either potential signal elements or only irrelevant noise. The distribution of signal elements within the array was manipulated by varying the size and location of these checks within a fixed-diameter stimulus. A blocked staircase procedure measured the threshold coherence for identification. An ideal detector would pool over just the relevant locations (vector-averaging and filter-maxing models make identical predictions for these signal-combination effects); however, humans only did this for medium (5×5 to 9×9) check sizes, and for large (15×15) check sizes when the signal was placed at the fovea. For small (1×1 to 3×3) check sizes and large (15×15) peripheral checks, pooling occurred indiscriminately over relevant and irrelevant locations. These findings suggest orientation signals are combined mandatorily over short ranges and in the periphery, but flexibly otherwise.
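The stimulus construction described above can be sketched in code. This is a minimal illustrative reconstruction, not the authors' actual implementation: the function name, parameter names, and the choice of a uniform random orientation distribution for noise elements are all assumptions; only the 24×24 grid, the coherence proportion, and the checkerboard arrangement of signal vs. noise checks come from the abstract.

```python
import numpy as np

def make_stimulus(coherence, check_size, global_ori=90.0, n=24, rng=None):
    """Sketch of one stimulus: an n x n grid of element orientations (degrees).

    Signal checks form a checkerboard of check_size x check_size blocks.
    Within signal checks, a `coherence` proportion of elements take the
    global orientation; all other elements get random orientations.
    Names and the uniform noise distribution are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    ori = rng.uniform(0.0, 180.0, size=(n, n))  # random noise everywhere
    rows, cols = np.indices((n, n))
    # checkerboard mask: True marks potential-signal checks
    signal_check = ((rows // check_size) + (cols // check_size)) % 2 == 0
    # assign the coherent orientation to a `coherence` fraction of
    # the elements inside signal checks
    idx = np.flatnonzero(signal_check)
    k = int(round(coherence * idx.size))
    chosen = rng.choice(idx, size=k, replace=False)
    ori.flat[chosen] = global_ori
    return ori, signal_check

stim, mask = make_stimulus(coherence=0.3, check_size=3)
```

Varying `check_size` while keeping the grid fixed reproduces the key manipulation: small checks interleave signal and noise locations finely, while large checks concentrate the signal in a few contiguous regions.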
Number of pages: 2
Publication status: Published - Dec 2012
Event: AVA Christmas Meeting - London, United Kingdom
Duration: 18 Dec 2012 → …