Title: Automated 3D ultrasound image analysis for first trimester assessment of fetal health
Author: Ryou, Hosuk
ISNI: 0000 0004 7966 0393
Awarding Body: University of Oxford
Current Institution: University of Oxford
Date of Award: 2017
Availability of Full Text:
Access from EThOS:
Full text unavailable from EThOS. Restricted access.
The first trimester fetal ultrasound scan is important to confirm fetal viability, to estimate the gestational age of the fetus and to detect fetal anomalies early in pregnancy. In particular, there is growing clinical interest in shifting aspects of fetal health screening typically performed at the 18-22 week fetal ultrasound scan to the first trimester ultrasound scan, to benefit maternal and fetal health and potentially to reduce healthcare costs by detecting pregnancy problems early. Ideally, this assessment would be based on 3D ultrasound, since the object of interest (the fetus) fits within a typical transducer field-of-view at this gestational age. The challenge then becomes one of diagnostic plane finding in a volume. In this thesis, we study, to our knowledge for the first time, how to automate the analysis of first trimester 3D fetal volume screening scans for the purpose of performing a basic fetal health assessment. The developed methods have been integrated into a software application with an interactive user interface to provide clinicians with a fully automatic tool for first trimester scan assessment. Specifically, this thesis first develops a deep learning algorithm, a Fully Convolutional Network (FCN)-based approach, for localization of the whole fetus within a 3D volumetric ultrasound scan. The developed approach is compared with a recently published machine learning approach based on hand-crafted features. The proposed FCN-based algorithm achieved higher whole-fetus localization accuracy (Intersection over Union (IoU) of 0.84 compared with 0.74 using Random Forests). Next, a multi-task FCN architecture was designed to perform both sagittal plane extraction and segmentation of the fetus into head, body and legs. This achieved 98.9% classification accuracy and higher segmentation accuracy than a single-task FCN (89.4% compared with 85.8%).
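To make the reported localization figures concrete, the following is a minimal sketch of the standard Intersection over Union (IoU) metric computed on binary 3D voxel masks. This illustrates the metric itself, not the thesis' implementation; the toy cuboid masks below are hypothetical.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over Union between two binary (e.g. 3D voxel) masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:  # both masks empty: no overlap to score
        return 0.0
    return float(np.logical_and(a, b).sum()) / float(union)

# Toy 3D example: predicted vs. ground-truth cuboid "fetus" regions
pred = np.zeros((10, 10, 10), dtype=bool)
pred[2:8, 2:8, 2:8] = True   # 6x6x6 = 216 voxels
truth = np.zeros((10, 10, 10), dtype=bool)
truth[3:9, 3:9, 3:9] = True  # 6x6x6 = 216 voxels, offset by one voxel

# Intersection 5x5x5 = 125, union 216 + 216 - 125 = 307
print(round(iou(pred, truth), 3))  # → 0.407
```

An IoU of 1.0 would mean the predicted and ground-truth regions coincide exactly, so the improvement from 0.74 to 0.84 reported above corresponds to substantially tighter whole-fetus localization.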
Based on the localization and segmentation results, we focused attention on the fetal parts in the axial view and, using another FCN-based approach, achieved segmentation accuracies of 95.0% and 96.3% for the brain and abdomen, respectively. Based on the whole-fetus segmentation result, the biometric planes for the brain and abdomen can be extracted, including correction of orientation misalignment. Given both the segmentation and the corrected orientation, automatic plane extraction, object orientation estimation and measurement can be performed. Automatic analysis is shown to give similar results to manual assessment. Finally, automatic limb assessment is considered. This is the most challenging part of our research, due to the variability in position and appearance of the objects of interest, and has not been attempted before. Again, an FCN-based solution is proposed. The developed network architecture achieves segmentation accuracies of 72.3%, 76.5%, 81.8% and 82.3%, and detection rates of 0.65, 0.50, 0.55 and 0.35, for the left arm, right arm, left leg and right leg, respectively. Our results are promising, but suggest that further technical research is needed on this topic before clinical utility can be explored.
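Segmentation overlap scores such as those above are commonly reported with the Dice similarity coefficient. As an illustration only (the abstract does not state which overlap metric the thesis uses, and the square masks below are hypothetical), a minimal Dice computation looks like this:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * float(np.logical_and(a, b).sum()) / float(denom)

# Toy 2D example: two overlapping square masks
pred = np.zeros((8, 8), dtype=bool)
pred[1:5, 1:5] = True    # 16 pixels
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True   # 16 pixels

# Intersection 3x3 = 9, so Dice = 2*9 / (16+16) = 0.5625
print(round(dice(pred, truth), 3))  # → 0.562
```

Dice weights the overlap against the mean mask size rather than the union, so for the same pair of masks it is always at least as large as the IoU; both range from 0 (no overlap) to 1 (identical masks).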
Supervisor: Noble, J. Alison
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available