Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.585337
Title: The nature of the representations underlying verbal behaviour : the interaction between auditory, visual and motor modalities
Author: Maidment, David William
Awarding Body: Cardiff University
Current Institution: Cardiff University
Date of Award: 2013
Abstract:
A fundamental issue in the study of verbal behaviour is whether the underpinning representation of speech, while derived from different modalities, is itself amodal. The current thesis contributes to this debate, utilising two behavioural phenomena to show that verbal performance is not simply limited to representations independent of the modality through which they were derived. Firstly, similarities in verbal short-term memory performance across presentation modalities have been explained in terms of a phonological level of representation. Namely, both auditory and visual modes of presentation demonstrate similar patterns of performance within the recency portion of the serial position curve. However, it is shown that while recall at the terminal list item for an auditory list is immune to the disruptive effect of task-irrelevant background sound and articulatory suppression, lipread recency is not immune. In addition, although the effect of an auditory suffix on an auditory list is due to the perceptual grouping of the suffix with the list, the corresponding effect with lipread speech is shown to be due to misidentification of the lexical content of the lipread suffix. Furthermore, even though a lipread suffix does not disrupt auditory recency, an auditory suffix does disrupt recency for lipread lists due to attentional capture. These findings are subsequently explained in terms of modality-specific perceptual and motor-speech output mechanisms, rather than the storage and manipulation of material at some phonological level of representation. Secondly, the mechanisms underlying the integration of seen and heard speech are investigated via the McGurk effect in order to understand the stage at which auditory and visual modes of speech come to be integrated.
It is shown that concurrently articulating verbal material out loud or silently mouthing speech during syllable identification reduces the McGurk effect, whereas passive listening to task-irrelevant speech or sequential manual tapping does not. On the basis that both concurrent articulation and silent mouthing impede subvocal speech production processes, the finding that both manipulations also disrupt the McGurk effect suggests that subvocal motor mechanisms necessary for speech production are involved in audiovisual integration. Taken together, if progress is to be made in understanding the representations underlying verbal behaviour, an approach should be adopted that not only incorporates an amodal, phonological representational form, but also considers the extent to which modality-specific systems primarily serving perceptual and motor processes contribute to performance.
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.585337
DOI: Not available
Keywords: BF Psychology