Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.763326
Title: Multi-modal image processing via joint sparse representations induced by coupled dictionaries
Author: Song, Pingfan
ISNI: 0000 0004 7661 2433
Awarding Body: UCL (University College London)
Current Institution: University College London (University of London)
Date of Award: 2018
Availability of Full Text:
Access from EThOS:
Full text unavailable from EThOS.
Access from Institution:
Abstract:
Real-world image processing tasks often involve multiple image modalities captured by different sensors. Because different sensors exhibit different characteristics, such multi-modal images are typically acquired with different resolutions, blurring kernels and noise levels. Since images of the same scene share attributes such as edges, textures and other primitives, it is natural to ask whether standard image processing tasks can be improved by leveraging the availability of multi-modal images. This thesis introduces a sparsity-based machine learning framework, along with algorithms, to address such multi-modal image processing problems. In particular, the thesis introduces a new coupled dictionary learning framework that captures complex relationships and disparities between different image types in a learned sparse-representation domain rather than in the original image domain.

The thesis then presents representative applications of this framework to key multi-modal image processing problems. First, it considers multi-modal image super-resolution, where one wishes to super-resolve a low-resolution image modality given another high-resolution image modality of the same scene; it develops both a coupled dictionary learning algorithm and a coupled super-resolution algorithm for this task [1, 2]. Second, it considers multi-modal image denoising, where one wishes to denoise a noisy image modality given another, less noisy image modality of the same scene; it develops an online coupled dictionary learning algorithm and a coupled sparse denoising algorithm for this task [3, 4]. Finally, it considers emerging medical imaging applications involving multi-contrast MRI reconstruction, including guided reconstruction and joint reconstruction; it proposes an iterative framework that combines coupled dictionary learning, coupled sparse denoising and k-space consistency for this task [5, 6].

The proposed framework captures complex dependencies among multi-modal data, including both similarities and disparities. This enables appropriate guidance information to be transferred to the target image without introducing noticeable texture-copying artifacts. Experiments on multi-modal images demonstrate that the proposed framework delivers significant performance improvements in various image processing tasks, including multi-modal image super-resolution, denoising and multi-contrast MRI reconstruction.
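The abstract describes the coupled sparse-representation idea only at a high level. As a purely illustrative sketch (not the thesis's actual algorithms or learned dictionaries), the snippet below assumes a common/unique decomposition of the form x ≈ Ψ_c z + Ψ u and y ≈ Φ_c z + Φ v, where z is a sparse code shared by the target and guidance modalities and u, v capture modality-specific disparities. The dictionary names (Psi_c, Psi, Phi_c, Phi), the ISTA solver and all toy data are assumptions introduced here for illustration.

```python
# Illustrative sketch of coupled sparse coding over a pair of image-patch modalities.
# The dictionaries and the plain ISTA solver are placeholders, not the thesis's
# actual learned dictionaries or algorithms.
import numpy as np

def ista_l1(A, b, lam, n_iter=200):
    """Solve min_a 0.5*||b - A a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ a - b)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

def coupled_sparse_code(x, y, Psi_c, Psi, Phi_c, Phi, lam=0.1):
    """Jointly code a target patch x and a guidance patch y with a shared
    (common) code z and modality-specific (unique) codes u, v:
        x ~ Psi_c z + Psi u,   y ~ Phi_c z + Phi v."""
    n, kc = Psi_c.shape
    k = Psi.shape[1]
    # Stack both modalities into a single l1-regularised sparse coding problem.
    A = np.block([[Psi_c, Psi, np.zeros((n, k))],
                  [Phi_c, np.zeros((n, k)), Phi]])
    b = np.concatenate([x, y])
    a = ista_l1(A, b, lam)
    z, u, v = a[:kc], a[kc:kc + k], a[kc + k:]
    return z, u, v

# Toy usage: reconstruct the target patch from its common and unique codes.
rng = np.random.default_rng(0)
n, kc, k = 64, 32, 32
Psi_c, Psi = rng.standard_normal((n, kc)), rng.standard_normal((n, k))
Phi_c, Phi = rng.standard_normal((n, kc)), rng.standard_normal((n, k))
x, y = rng.standard_normal(n), rng.standard_normal(n)
z, u, v = coupled_sparse_code(x, y, Psi_c, Psi, Phi_c, Phi)
x_hat = Psi_c @ z + Psi @ u   # guided estimate of the target-modality patch
```

Stacking both modalities into one l1-regularised problem is one straightforward way to enforce a shared code while letting the unique components absorb disparities; the thesis's own coupled dictionary learning and reconstruction formulations may differ in detail.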
Supervisor: Rodrigues, M. Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.763326  DOI: Not available