A striking feature of binocular vision is that different images in the two eyes can be ‘fused’ in perception, yet little is known about how fusion is achieved. We studied fusion and diplopia for Gaussian-blurred, horizontal edges with vertical disparity (silencing stereo vision). For a wide range of blurs B, the range of fusion is about 2.5B. If fusion linearly summed or averaged the monocular signals, we should expect fused edges to look increasingly blurred as disparity increased. In a blur-matching task, we found that this was true when the two edges were physically added (monocular control), but for dichoptic edges perceived blur was nearly invariant with disparity. We show that such blur-preserving fusion occurs if luminance gradients are computed for each eye, and the two Gaussian gradient profiles are then combined as a contrast-weighted geometric mean. Finally, we show that this model for fusion is almost exactly equivalent to our earlier two-stage model derived from experiments on binocular and dichoptic contrast discrimination (Meese, Georgeson & Baker, 2006, Journal of Vision). The binocular interactions proposed there can now be seen to implement the contrast-weighted geometric mean, and thus to achieve blur-preserving binocular fusion, followed by signal compression.
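The key property of the proposed combination rule can be illustrated numerically: the geometric mean of two equal-width Gaussian gradient profiles is itself a Gaussian of the same width (blur preserved), whereas their arithmetic average is broader (blur increases with disparity). The sketch below is our own illustration of this property, not the authors' implementation; the helper names (`gaussian_gradient`, `profile_sigma`), the equal-contrast setting, and the weights-proportional-to-contrast assumption are illustrative choices.

```python
import numpy as np

def gaussian_gradient(x, center, sigma, contrast):
    """Luminance gradient of a Gaussian-blurred edge: a Gaussian profile."""
    return contrast * np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

def profile_sigma(x, g):
    """Effective blur: standard deviation of the profile treated as a density."""
    w = g / np.sum(g)
    mu = np.sum(w * x)
    return np.sqrt(np.sum(w * (x - mu) ** 2))

x = np.linspace(-15.0, 15.0, 3001)
B = 2.0                      # edge blur (sigma of the gradient profile)
d = 3.0                      # disparity between the two eyes' edges
cL, cR = 1.0, 1.0            # monocular contrasts (equal here)

gL = gaussian_gradient(x, -d / 2, B, cL)   # left-eye gradient profile
gR = gaussian_gradient(x, +d / 2, B, cR)   # right-eye gradient profile

# Contrast weights, normalised to sum to 1 (an illustrative assumption)
wL, wR = cL / (cL + cR), cR / (cL + cR)

# Linear averaging: the combined profile broadens, so the edge looks more blurred
avg = wL * gL + wR * gR

# Contrast-weighted geometric mean: a Gaussian of the same width, so blur is preserved
fused = gL ** wL * gR ** wR

print(profile_sigma(x, avg))    # ~2.5 : blur grows with disparity
print(profile_sigma(x, fused))  # ~2.0 : blur invariant with disparity
```

For equal contrasts the average of two Gaussians separated by d has variance B² + d²/4 (here 2.5² for B = 2, d = 3), while the geometric mean keeps variance B² exactly, which matches the blur-invariance found in the dichoptic condition.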
ECVP 2012 Abstracts