Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.635202
Title: Automatic analysis of magnetic resonance images of speech articulation
Author: Raeesy, Zeynabalsadat
Awarding Body: University of Oxford
Current Institution: University of Oxford
Date of Award: 2013
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS. Restricted access.
Abstract:
Magnetic resonance imaging (MRI) technology has made it possible to capture the dynamics of speech production at fine temporal and spatial resolutions, generating substantial quantities of images to be analysed. Manual processing of large MRI databases is labour-intensive and time-consuming; hence, to study articulation on a large scale, techniques for automatic feature extraction are needed. This thesis investigates approaches for automatic information extraction from an MRI database of dynamic articulation.

We first study articulation by observing pixel intensity variations in image sequences. The correspondence between acoustic segments and images is established by forced alignment of the speech signals recorded during articulation. We obtain speaker-specific typical phoneme articulations that represent general articulatory configurations in running speech. Articulation dynamics are parametrised by measuring the magnitude of change in pixel intensities over time, and we demonstrate a direct correlation between the dynamics of articulation measured in this way and the energy of the corresponding acoustic signals.

For more sophisticated applications, a parametric description of vocal tract shape is desired. We investigate several shape extraction techniques and present a framework that automatically identifies and extracts vocal tract shapes, incorporating shape prior information and intensity features to recognise and delineate the shape. Extensive assessments show the framework to be a promising tool for automatic identification of vocal tract boundaries in large MRI databases; to the best of our knowledge, the segmentation framework proposed in this thesis is novel in the field of speech production. The methods investigated here facilitate automatic information extraction from images, both for studying the dynamics of articulation and for vocal tract shape modelling.
This thesis advances the state of the art by bringing new perspectives to the study of articulation, and by introducing a segmentation framework that is automatic, does not require extensive initialisation, and exhibits very few failures.
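The abstract's measure of articulation dynamics, the magnitude of pixel intensity change over time, can be illustrated with a minimal sketch. This is not the thesis's actual implementation; it simply assumes the image sequence is a NumPy array of shape (frames, height, width) and summarises each frame-to-frame change as the mean absolute intensity difference:

```python
import numpy as np

def articulation_dynamics(frames):
    """Mean absolute frame-to-frame pixel intensity change.

    frames: array of shape (T, H, W), an MRI image sequence.
    Returns a length T-1 array; larger values suggest faster
    articulator movement between consecutive frames.
    """
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) per-pixel changes
    return diffs.mean(axis=(1, 2))           # average over all pixels

# A static sequence yields zero measured dynamics.
static = np.ones((5, 4, 4))
print(articulation_dynamics(static))  # -> [0. 0. 0. 0.]
```

Under the abstract's claim, a time series like this would correlate with the energy of the acoustic signal recorded in parallel.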
Supervisor: Coleman, John
Sponsor: Clarendon Fund
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.635202
DOI: Not available
Keywords: Phonetics ; Biomedical engineering ; Speech Production ; Articulation ; Acoustics ; Magnetic Resonance Imaging ; Automatic Image Segmentation