Investigating the determinants of temporal integration
Physiological, clinical and empirical studies suggest that visual input is functionally segregated (e.g. Livingstone and Hubel, 1988; Hubel and Livingstone, 1987; Zeki, 1973). Moreover, this functional segregation results in concurrently presented feature attributes being processed and perceived at different times (Moutoussis and Zeki, 1998). However, findings from the attention and categorisation literatures call into question a fixed account of feature processing (Posner, 1980; Stelmach and Herdman, 1991; Carrasco and McElree, 2001; Oliva and Schyns, 2000; Goldstone, 1995). In particular, previous research has demonstrated a processing advantage for attended information. It therefore seems likely that enhancing the saliency of an attribute will accelerate processing along that dimension and consequently modulate any perceptual asynchrony between concurrently presented features. Moreover, if attention confers a selective processing advantage, it should induce a processing asynchrony between attended and unattended information across the visual field.

The present research set out to examine how the visual system constructs a seemingly unified and veridical representation from this asynchronous information. First, the results add weight to the proposal that visual processing is not synchronous. Second, because this asynchrony is revealed in perception, it appears that the visual system fails to compensate for it. Finally, the asynchrony does not appear to be fixed; instead, the experimental or attentional demands of the task seem to modulate the perceptual processing of attribute or localised information.