Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748681
Title: Eye tracking to aid fetal ultrasound image analysis
Author: Ahmed, Maryam
Awarding Body: University of Oxford
Current Institution: University of Oxford
Date of Award: 2017
Abstract:
Current automated fetal ultrasound (US) analysis methods employ local descriptors and machine learning frameworks to identify salient image regions. This 'bottom-up' approach has limitations, as structures identified by local descriptors are not necessarily anatomically salient. In contrast, the human visual system employs a 'top-down' approach to image analysis, guided primarily by image context and prior knowledge. This thesis attempts to bridge the gap between top-down and bottom-up approaches to US image analysis. We conduct eye tracking experiments to determine which local descriptors and global constraints guide the visual attention of human observers interpreting fetal US images. We then implement machine learning frameworks which mimic observers' visual search strategies for anatomical landmark localisation, standardised image plane selection, and video classification.

We first developed a framework for landmark localisation in 2-D fetal abdominal US images. Informed by the eye movements of observers searching for anatomical landmarks, we derived a pictorial structures model which achieved mean detection accuracies of 87.2% and 83.2% for the stomach bubble and umbilical vein, respectively. We extended this framework to automate standardised imaging plane detection in 3-D fetal abdominal US volumes, achieving a mean standardised plane detection accuracy of 92.5%.

We then implemented a bag-of-visual-words model for 2-D+t fetal US video clip classification. We recorded the eye movements of observers tasked with classifying videos, and trained a feed-forward neural network directly on the eye tracking data to predict visually salient regions in unseen videos. This perception-inspired spatiotemporal interest point operator was used within a framework for the classification of fetal US video clips, achieving 80.0% mean accuracy. This work constitutes the first demonstration that high-level constraints and visual saliency models obtained through eye tracking experiments can improve the accuracy of machine learning frameworks for US image analysis.
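To illustrate the pictorial structures idea the abstract describes, here is a minimal sketch of joint two-landmark inference (e.g. stomach bubble and umbilical vein). It assumes precomputed unary appearance cost maps and a Gaussian geometric prior on the landmarks' relative offset; the thesis derives its spatial constraints from observers' eye movements, which this toy example does not reproduce.

```python
import numpy as np

def localise_pair(unary_a, unary_b, mean_offset, cov, k=50):
    """Jointly localise two landmarks by minimising the pictorial-structures
    energy E(la, lb) = cost_a(la) + cost_b(lb) + deformation(lb - la)."""
    H, W = unary_a.shape
    inv_cov = np.linalg.inv(cov)
    # Keep only the k lowest-cost candidate locations per landmark
    # so the pairwise search stays tractable.
    cand_a = np.stack(np.unravel_index(np.argsort(unary_a.ravel())[:k], (H, W)), axis=1)
    cand_b = np.stack(np.unravel_index(np.argsort(unary_b.ravel())[:k], (H, W)), axis=1)
    best, best_pair = np.inf, None
    for ya, xa in cand_a:
        for yb, xb in cand_b:
            # Mahalanobis deformation cost under the Gaussian offset prior.
            d = np.array([yb - ya, xb - xa], dtype=float) - mean_offset
            e = unary_a[ya, xa] + unary_b[yb, xb] + 0.5 * d @ inv_cov @ d
            if e < best:
                best, best_pair = e, ((ya, xa), (yb, xb))
    return best_pair

# Toy usage with random cost maps and an assumed learnt offset/covariance.
rng = np.random.default_rng(0)
ua, ub = rng.random((64, 64)), rng.random((64, 64))
pair = localise_pair(ua, ub, mean_offset=np.array([5.0, 10.0]), cov=np.diag([9.0, 9.0]))
```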
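Similarly, the following is a minimal sketch of training a feed-forward network on fixation data to score patch saliency, loosely following the abstract's description. The patch features, toy labels, and network size are illustrative assumptions, not the thesis design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_patch_features(frame, centre, size=16):
    """Flatten a square intensity patch around `centre` as a crude feature."""
    y, x = centre
    h = size // 2
    patch = frame[max(y - h, 0):y + h, max(x - h, 0):x + h]
    # Zero-pad patches clipped at the frame border to a fixed length.
    patch = np.pad(patch, ((0, size - patch.shape[0]), (0, size - patch.shape[1])))
    return patch.ravel().astype(float)

# Features at fixated (label 1) and non-fixated (label 0) locations across
# training frames; filled here with toy data purely to show the shapes.
rng = np.random.default_rng(0)
frames = rng.random((10, 128, 128))
fixations = [(rng.integers(8, 120), rng.integers(8, 120)) for _ in range(100)]
X = np.stack([extract_patch_features(frames[i % 10], f) for i, f in enumerate(fixations)])
y = rng.integers(0, 2, size=100)  # toy labels; real ones come from fixation maps

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
# clf.predict_proba on patches of unseen frames then yields a saliency score
# per location, usable as a spatiotemporal interest-point operator.
```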
Supervisor: Noble, Alison
Sponsor: Science and Technology Facilities Council
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.748681
DOI: Not available