The visual representation of texture
This research is concerned with texture, a source of visual information that has motivated a large body of psychophysical and computational research. This thesis questions how useful the accepted view of texture perception is. From a theoretical point of view, work to date has largely avoided two critical aspects of a computational theory of texture perception: first, what is texture? Second, what is an appropriate representation for texture? This thesis argues that a task-dependent definition of texture is necessary, and proposes a multi-local, statistical scheme for representing texture orientation. Human performance on a series of psychophysical orientation discrimination tasks is compared with specific predictions from the scheme. The first set of experiments investigates observers' ability to derive statistical estimates directly from texture. An analogy is reported between the way texture statistics are derived and the visual processing of spatio-luminance features. The second set of experiments is concerned with the way texture elements are extracted from images (an example of the generic grouping problem in vision). The use of highly constrained experimental tasks, typically texture orientation discriminations, allows the formulation of simple statistical criteria for setting critical parameters of the model (such as the spatial scale of analysis). It is shown that schemes based on isotropic filtering and symbolic matching do not suffice for performing this grouping, but that the proposed scheme, based on oriented mechanisms, does. Taken together, these results suggest a view of visual texture processing not as a disparate collection of processes, but as a general strategy for deriving statistical representations of images, common to a range of visual tasks.
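The multi-local statistical idea can be sketched in code. The following is an illustrative example only, not the model developed in the thesis: many local orientation measurements (here taken from image gradients, a stand-in for the oriented mechanisms discussed above) are pooled into a single circular statistic for the texture. All function names and parameters are hypothetical.

```python
import numpy as np

def texture_orientation(image):
    """Pool many local orientation estimates into one statistical summary.

    A minimal sketch, assuming local orientation can be read from image
    gradients and pooled by the circular mean of doubled angles
    (orientation is axial, defined modulo 180 degrees), weighted by
    local contrast. Returns an angle in radians in [0, pi).
    """
    gy, gx = np.gradient(image.astype(float))
    theta = np.arctan2(gy, gx)      # local orientation at each pixel
    weight = np.hypot(gx, gy)       # weight estimates by local contrast
    # Double the angles so that theta and theta + pi are identified,
    # then take the weighted circular mean and halve the result.
    c = np.sum(weight * np.cos(2 * theta))
    s = np.sum(weight * np.sin(2 * theta))
    return 0.5 * np.arctan2(s, c) % np.pi

# Usage: a synthetic grating oriented at 45 degrees.
i, j = np.mgrid[0:64, 0:64]
a = np.pi / 4
grating = np.sin(2 * np.pi * 0.1 * (j * np.cos(a) + i * np.sin(a)))
estimate = texture_orientation(grating)
```

The doubled-angle pooling is one standard way to average axial data; the point of the sketch is only that a global texture property is derived statistically from a population of local measurements, rather than from any single one.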