Title: Mental representation in visual/haptic crossmodal memory
Author: Lacey, Simon
Awarding Body: Nottingham Trent University
Current Institution: Southampton Solent University
Date of Award: 2005
It is an unresolved question whether the mental representations that enable visual/haptic crossmodal memory for objects are modality-specific - either visual (Zhang et al., 2004) and/or haptic (Reed et al., 2004); modality-independent - either abstract (Easton et al., 1997) or structural but still in some way abstract (Reales & Ballesteros, 1999); or dual-code - visual for unfamiliar objects but visual and verbal for familiar objects (Johnson et al., 1989).

The thesis argues that dual-code representation can be parsimoniously reduced to visual representation, with verbal processes relegated to strategic roles, and that visual representation can in turn be reduced to spatial representation. Spatial representation, a novel hypothesis in visual/haptic crossmodal memory, can be defined as containing information about size, shape and the arrangement of an object's parts and features relative to each other.

Seven experiments tested the existing theories and the novel hypothesis, primarily through the innovative use of interference techniques. These experiments found no evidence for strictly abstract representation or for the main predictions of the dual-code account. Interference had no effect on memory for familiar objects; it is suggested that these are resistant to interference either because they involve deep long-term memory representations or because they are represented through an associative network of different representations.

The novel hypothesis of spatial representation was supported in experiments that contrasted visual and haptic, spatial and non-spatial interference. These showed clearly that the modality of the interference was irrelevant and that spatiality was the key factor: whether it occurred during encoding or retrieval, spatial interference disrupted performance regardless of its modality, and it disrupted the visual-haptic and haptic-visual conditions equally.
The thesis concludes that visual/haptic crossmodal memory is enabled by a modality-independent spatial representation. This new finding is an original and theoretically important contribution because it specifies the format of a modality-independent representation and addresses two of the three main task constraints: how any kind of object can be represented, and how it can be represented by both vision and touch. It is also a generative source of hypotheses about the third constraint: why error is systematically greater in the haptic-visual condition than in the visual-haptic condition.
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available
Keywords: Psychology