The perception of shape from shading (SFS) has been an active research topic for more than two decades, yet its quantitative description remains poorly specified. One obstacle is the variability typically found between observers in SFS tasks. In this study, we take a different view of these inconsistencies, attributing them to uncertainties inherent in human SFS. By identifying these uncertainties, we are able to probe the computation underlying SFS in humans. We present new experimental results with implications for SFS. Our data favor the idea that human SFS operates in at least two distinct modes. In one mode, perceived slant is a linear function of luminance, or close to linear with some perturbation. Whether the linear relationship holds is influenced by the relative contrasts of the edges bounding the luminance variation. This mode of operation is consistent with collimated lighting from an oblique angle. In the other mode, recovered surface height is indicative of a surface under lighting that is either diffuse or collimated and frontal. Shape estimates in this mode are partially accountedted for by the "dark-is-deep" rule (height ∝ luminance). Switching between the two modes appears to be driven by the sign of the edges at the boundaries of the stimulus: linear shading was active when the boundary edges had the same contrast polarity, whereas dark-is-deep was active when they had opposite contrast polarity. When both same-sign and opposite-sign edges were present, observers preferred linear shading but could adopt a combination of the two computational modes.
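The two computational modes can be sketched numerically. The following is a minimal illustration, not the authors' model: the luminance profile and gain constants are hypothetical, and the only substance taken from the text is the form of each rule (mode 1: slant ∝ luminance, so height is the integral of luminance; mode 2: height ∝ luminance).

```python
import numpy as np

# Hypothetical 1-D luminance profile along a surface cross-section
# (values chosen for illustration only).
x = np.linspace(0.0, 1.0, 100)
luminance = 0.5 + 0.4 * np.sin(2 * np.pi * x)

def height_linear_shading(lum, dx, gain=1.0):
    """Mode 1 (linear shading): perceived slant is proportional to
    luminance; height is the running integral of that slant."""
    slant = gain * lum
    return np.cumsum(slant) * dx

def height_dark_is_deep(lum, gain=1.0):
    """Mode 2 ("dark-is-deep"): perceived height is proportional to
    luminance, so darker regions read as deeper."""
    return gain * lum

dx = x[1] - x[0]
h1 = height_linear_shading(luminance, dx)
h2 = height_dark_is_deep(luminance)

# Under dark-is-deep, the darkest point is the lowest point.
assert np.argmin(h2) == np.argmin(luminance)
```

Note how the two rules predict qualitatively different surfaces from the same image: with an everywhere-positive luminance profile, mode 1 yields a monotonically rising height, while mode 2 yields a height profile that simply mirrors the luminance modulation.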