Layer segmentation using hue, texture and luminance amplitude in a steerable filter framework

Andrew J. Schofield, Xiaoyue Jiang, Jeremy L. Wyatt

Research output: Contribution to journal › Conference abstract › peer-review


Humans are able to differentiate variations in luminance due to illumination (eg shadows and shading) from those due to material properties (eg albedo). This process is sometimes referred to as layer segmentation, where each layer has a distinct physical origin. Layer segmentation is recognised as a challenging, ill-posed problem in computer vision, where the term 'intrinsic image extraction' is preferred. Studies of human vision have suggested a number of heuristics which may suitably constrain the layer segmentation problem (Kingdom, 2008, Vision Research 48, 2090-2105). We focus on just three cues: hue, texture, and local luminance amplitude (the difference between the light and dark parts of a texture pattern). Hue and texture tend to vary at material boundaries, whereas local luminance amplitude varies with illumination. We propose a framework for layer segmentation based on steerable filters. Filtered components are weighted according to their correlations with the above cues. The weighted components are then used to construct illumination and reflectance images. The method operates on single images without a training phase. We have tested the method with a combination of surface types and illumination sources; it works particularly well for shaded, randomly patterned, and smoothly undulating surfaces as found on natural objects.
Original language: English
Pages (from-to): 46
Number of pages: 1
Issue number: 1_suppl
Publication status: Published - 1 Aug 2010
Event: 33rd European Conference on Visual Perception - Lausanne, Switzerland
Duration: 22 Aug 2010 - 26 Aug 2010

