In computer vision, shape-from-shading is the process of recovering surface orientation from luminance changes in a scene. However, luminance changes in real images are ambiguous: they may be due to shading or to reflectance changes. This ambiguity is a problem for those shape-from-shading algorithms that assume uniform reflectance. Fortunately, reflectance changes are often associated with changes in other surface properties, such as hue and texture. Here we present an algorithm for separating the shading and reflectance components of greyscale images based on texture variations. Our algorithm exploits the same rule that humans appear to use to assist in shape-from-shading tasks: luminance changes that are coincident with changes in the contrast of a visual texture are more likely to be due to reflectance changes than those that are not (Schofield et al., 2006, Vision Research 46, 3462–3482). This rule in turn arises from the multiplicative nature of shading: shading scales luminance but leaves texture contrast unchanged. We first estimate luminance gradients in a low-pass filtered version of the image. These gradients are then classified as 'reflectance' or 'shading' depending on the presence of coincident contrast changes, as found by a texture-segmentation algorithm. Unfortunately, the estimated positions of texture edges do not always match exactly with their associated luminance changes. We solve this problem by introducing an edge-width estimation mechanism that provides tolerance to such mismatches. The final shading map is obtained by reintegrating the 'shading' gradients, while the reflectance component is obtained by dividing the original image by the resultant shading map. The algorithm can separate shading and reflectance when a texture is present and the degree of shading is not so great as to reduce texture contrast below usable levels.
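The pipeline described above (low-pass gradient estimation, contrast-based classification of gradients, reintegration of the retained 'shading' gradients, and division to recover reflectance) can be sketched as follows. This is a minimal illustration under stated assumptions, not the published implementation: the Gaussian-residual contrast measure, the `contrast_thresh` parameter, and the FFT-based Poisson reintegration are hypothetical stand-ins for the paper's texture-segmentation and edge-width estimation machinery.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def poisson_reconstruct(gx, gy):
    """Least-squares reintegration of a gradient field via an FFT Poisson
    solve (periodic boundaries). One common choice; not necessarily the
    authors' reintegration method."""
    h, w = gx.shape
    # Divergence of the (masked) gradient field, by backward differences.
    div = np.zeros_like(gx)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    # Solve Laplace(f) = div in the Fourier domain.
    fdiv = np.fft.fft2(div)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    denom = 2 * np.cos(2 * np.pi * xx / w) + 2 * np.cos(2 * np.pi * yy / h) - 4
    denom[0, 0] = 1.0  # avoid division by zero; mean level is fixed by caller
    return np.fft.ifft2(fdiv / denom).real

def separate_shading_reflectance(image, sigma_lp=8.0, contrast_thresh=0.05):
    """Sketch of shading/reflectance separation from texture-contrast cues."""
    # 1. Estimate luminance gradients in a low-pass version of the image.
    lowpass = gaussian_filter(image, sigma_lp)
    gy, gx = np.gradient(lowpass)

    # 2. Local texture contrast: RMS of the high-pass residual in a
    #    Gaussian window (a simple proxy for a texture-segmentation stage).
    residual = image - lowpass
    contrast = np.sqrt(gaussian_filter(residual ** 2, sigma_lp))

    # 3. Classify: luminance gradients coincident with a contrast change are
    #    attributed to reflectance; the rest are kept as shading.
    cy, cx = np.gradient(contrast)
    shading_mask = np.hypot(cx, cy) < contrast_thresh
    sx = np.where(shading_mask, gx, 0.0)
    sy = np.where(shading_mask, gy, 0.0)

    # 4. Reintegrate the retained 'shading' gradients.
    shading = poisson_reconstruct(sx, sy)
    shading += lowpass.mean() - shading.mean()  # restore the mean level

    # 5. Multiplicative model: reflectance = image / shading.
    reflectance = image / np.maximum(shading, 1e-6)
    return shading, reflectance
```

Thresholding the contrast-gradient magnitude is the simplest possible classifier; the published algorithm additionally estimates edge widths so that a texture edge slightly offset from its luminance edge is still matched to it.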