How the visual system combines the responses of its orientation- and spatial-frequency-tuned filters is a fundamental issue. Starting with the 3-D Newtonian potential, defined by the original image coordinates and a 'dummy' scale variable, we Taylor expand this 3-D holomorphic function using cascades of first-order constraints along the direction of the maximum local gradient. In collapsing this 3-D signal representation using a 2-D Fourier transform, the resulting operator delivers the Riesz transform and also provides rules by which filters tuned to different scales and orientations may be combined. Unlike the visual system, filters derived from a Taylor series expansion are neither distributed nor efficient. We show how Taylor's expansion may be embedded in a polar decomposition of the image signal, from which both distributed and efficient signal representations can be encoded across scale and orientation by 'steering'. Radial (scale) computations are, however, largely unconstrained in our approach, which allows us to introduce a radial Mellin-like transform via complex log-exponential filters to facilitate scale-invariant computations. We implement the Mellin-Riesz transform using a distributed signal representation, after which we linearly compress across scale and orientation. Image features are then detected using Bayesian methods for model selection. We show: (i) that the inclusion of an image signal's mean reduces noise when identifying image features, and (ii) that higher-order image features (e.g. saddle points) can also be identified. We predict that the power invariance afforded by the compressed Mellin-Riesz transform may help to explain fundamental mechanisms that drive visual adaptation.