Title: Reinforcement learning for context-specific image analysis and understanding
Author: Wang, Lichao
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2012
Availability of Full Text: Full text unavailable from EThOS; access may be available from the institution.
With the increasing throughput of medical imaging modalities, automatic image analysis and segmentation play an important role in both clinical diagnosis and therapy. Within the medical image computing community, this has been pursued since the early days of digital imaging, yet it is still far from perfect. This is due not only to the diversity of imaging modalities but also to the complexity of anatomy and the difficulty of maintaining consistent image quality. Although in certain modalities, such as Computed Tomography (CT) with its fully calibrated pixel values, automatic image analysis techniques have enjoyed a greater degree of success, their general development remains limited. Nevertheless, a myriad of segmentation algorithms have been developed for specific image analysis tasks. One of the major reasons for the lack of robust, generalisable analysis platforms is, in fact, related to knowledge acquisition and representation. Early techniques tended to use ad hoc rules to incorporate prior knowledge, combined with application-specific parameter tuning, whereas recent algorithms rely on statistical models of large population datasets for both model building and training. For almost all the platforms developed so far, once the system is built it remains rigid, although human adjustment or correction is often carried out to rectify the errors involved; this is important for satisfying the legal and quality assurance requirements of clinical applications. For most systems, the rigidity of the algorithm design means that the same processing error persists, requiring repeated user interaction until the next software release. Practically, satisfying a diverse range of clinical requirements is difficult, and incorporating specific knowledge of unseen pathological cases during software development is impractical.
To overcome these problems, there has been increasing interest in developing systems that can incrementally learn domain-specific knowledge from human observers, so that the algorithm can adapt to different segmentation requirements. The purpose of this thesis is to propose a general context-specific segmentation framework that uses reinforcement learning to capture domain experts' knowledge during image segmentation. Specific issues related to ensemble-learning-based visual-saliency extraction, reinforcement learning, the use of incremental mixture models for on-line model update, and the use of eye tracking for implicit knowledge acquisition are addressed. Detailed validation and performance comparison against the current state of the art are carried out on synthetic, natural-scene and medical images. Unlike most existing techniques, the algorithms proposed in this thesis build models based on both the underlying image features and interactive user behaviour. As a result, they are able to implicitly extract additional information related to the image-analysis tasks, while the quality of those tasks remains under the control of the user. The potential clinical value of the methods is demonstrated through detailed validation with both synthetic and in situ data.
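To give a flavour of the general idea (not the thesis's actual algorithm), the sketch below shows how tabular Q-learning can adapt a single segmentation parameter, here an intensity-threshold level, from user feedback. The state space, action set, reward scheme and all names are illustrative assumptions: user approval is simulated by a reward that is positive only when an adjustment moves the threshold toward a desired target.

```python
# Minimal, hypothetical sketch: Q-learning over threshold adjustments.
# Everything here (states, actions, simulated reward) is an assumption
# for illustration, not the method described in the thesis.
import random

ACTIONS = (-1, 0, 1)  # lower / keep / raise the threshold level


def train(target=7, n_levels=16, episodes=300, alpha=0.5, gamma=0.9,
          epsilon=0.2, seed=0):
    """Learn which threshold adjustment to take at each threshold level.

    The 'user' is simulated by a reward that is +1 when the adjustment
    moves toward (or stays at) the target level and -1 otherwise.
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_levels) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(n_levels)
        for _ in range(20):  # one short interaction episode
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), n_levels - 1)
            # Stand-in for user approval of the adjustment.
            r = 1.0 if (abs(s2 - target) < abs(s - target)
                        or s2 == target) else -1.0
            # Standard Q-learning update.
            q[(s, a)] += alpha * (
                r + gamma * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
            s = s2
    return q


def greedy_policy(q, s):
    """Best learned adjustment for threshold level s."""
    return max(ACTIONS, key=lambda a: q[(s, a)])
```

After training, the greedy policy raises the threshold when below the (simulated) preferred level, lowers it when above, and holds it once there; in the actual framework, the reward would instead come from the expert's interactive behaviour rather than a fixed target.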
Supervisor: Yang, Guang-Zhong Sponsor: CORDA
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID:  DOI: Not available