Title: Advanced methods for fusion of remote sensing images and oil spill segmentation
Author: Longman, Fodio S.
ISNI:       0000 0004 8510 6006
Awarding Body: University of Sheffield
Current Institution: University of Sheffield
Date of Award: 2019
Remote sensing systems on board satellites (spaceborne) or aircraft (airborne) continue to play a significant role in disaster management and mitigation, including oil spill detection, due to their ability to obtain wide-area coverage images and other data from a distance. A single remotely sensed image can cover hundreds of kilometres of the Earth's surface, enabling wider monitoring and change detection observation. When oil spills occur, remote sensing systems equipped with different sensors covering the spectral bands of the electromagnetic spectrum are deployed to obtain images for damage assessment, scientific analysis, or to ascertain the spill location, the amount of oil spilled and the type of oil, supporting efficient planning and management and the identification of illegal ballast dumping for legal action. In the design of such remote sensing systems, trade-offs are inevitable due to technological limitations, resulting in compromises between spatial and spectral resolution. Panchromatic sensors, for example, obtain images at high spatial resolution but lower spectral resolution, while hyperspectral sensors obtain images at high spectral resolution but lower spatial resolution. Additionally, optical systems depend on external energy sources to obtain images, while other systems can acquire data irrespective of weather conditions. By combining data originating from different sources, scientists, analysts and planners can obtain images of higher quality than the individual images and can take advantage of the complementary information embedded in the diverse data acquired. This thesis presents a new framework for oil spill detection that combines data originating from different imaging sensors of remote sensing systems. Firstly, the new framework for oil spill segmentation utilises image fusion to improve image quality and to take advantage of the complementary information available in the different resolutions of SAR images.
The framework adopts the wavelet image fusion technique, where the individual images are converted from the spatial to the frequency domain and decomposed into approximation and detail coefficients, allowing image properties to be transferred using a maximum fusion rule. While this method improves the spatial resolution of images and retains colour information, it is observed that the scale of decomposition needs to be selected sensibly, since small scales create mosaic effects and large scale values cause loss of colour content, making the method unsuitable for images with different spectral channels. To solve the problem of multi-modality in images, a Gaussian Process (GP) regression approach is utilised with a custom covariance that learns the geometry and intensity of pixels and also handles the change of support problem inherent in multi-resolution images. Established performance metrics from the literature are used to evaluate the quality of the fused images against reference data. Additionally, a qualitative and quantitative review of pansharpening methods for hyperspectral images is carried out specifically for the oil spill detection application. The pansharpened results are compared in terms of unmixing performance with a reference hyperspectral image. This review can help researchers interested in this field of study to determine which methods are best for pansharpening and unmixing, and to answer the question of whether pansharpening improves unmixing results. It can also be extended to other applications, including weather forecasting and spectral analysis. Lastly, a new covariance kernel is developed to solve image fusion problems in multi-band images by treating each spatial and spectral channel separately as input to the Gaussian process, allowing different spatial and spectral pixels of the images to be learned and combined.
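The wavelet fusion scheme described above can be illustrated with a minimal sketch: a single-level 2D Haar transform (standing in for whichever wavelet family the thesis actually uses, which is not specified here), an averaging rule for the approximation coefficients and a maximum-absolute rule for the detail coefficients. Function names are illustrative, not the thesis's own.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform: approximation plus (h, v, d) details."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def ihaar2d(a, details):
    """Inverse of haar2d (perfect reconstruction)."""
    h, v, d = details
    img = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    img[0::2, 0::2] = a + h + v + d
    img[0::2, 1::2] = a + h - v - d
    img[1::2, 0::2] = a - h + v - d
    img[1::2, 1::2] = a - h - v + d
    return img

def fuse(img1, img2):
    """Wavelet fusion: average the approximations, keep the detail
    coefficient with the larger magnitude (maximum fusion rule)."""
    a1, d1 = haar2d(img1)
    a2, d2 = haar2d(img2)
    a = (a1 + a2) / 2
    details = tuple(np.where(np.abs(c1) >= np.abs(c2), c1, c2)
                    for c1, c2 in zip(d1, d2))
    return ihaar2d(a, details)
```

In a multi-scale version, the decomposition would be applied recursively to the approximation band; the scale-selection sensitivity noted in the abstract corresponds to how many such levels are used.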
The developed approach allows the transfer of information between different image modalities, enabling the local structure of the high spatial resolution image that forms the base of the estimated image to be recovered. The developed fusion approaches achieve compelling enhancement when compared with the state of the art. Furthermore, segmentation is performed on the fused and reference images, with the developed fused image picking up more objects than the other methods. This can be attributed to the ability of the approach to sharpen the resolution of the spectral channels, supported by pixel coordinates from the high spatial resolution image, which improves the edges in the image.
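As an illustration of the idea behind treating spatial and spectral coordinates as separate GP inputs, the following sketch implements plain GP regression in NumPy with a squared-exponential kernel that assigns one length scale to the spatial coordinates (row, column) and another to the spectral coordinate (band). The thesis's actual kernel, which also models pixel intensity and the change of support, is more elaborate; all names and parameter values here are assumptions.

```python
import numpy as np

def kernel(X1, X2, ls_spatial=2.0, ls_spectral=1.0, variance=1.0):
    """Squared-exponential covariance over inputs (row, col, band), with
    separate length scales for the spatial and spectral dimensions."""
    d_sp = ((X1[:, None, :2] - X2[None, :, :2]) ** 2).sum(-1)   # spatial distance
    d_ch = (X1[:, None, 2] - X2[None, :, 2]) ** 2               # spectral distance
    return variance * np.exp(-0.5 * (d_sp / ls_spatial**2 + d_ch / ls_spectral**2))

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Posterior mean of a zero-mean GP at X_test given noisy observations."""
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = kernel(X_test, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return K_s @ alpha
```

In a fusion setting, the training set would mix coarse-grid pixels from a spectral band with fine-grid pixels from the high spatial resolution image, and the prediction grid would be the fine pixel lattice for each band, so that information is transferred across modalities through the shared covariance.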
Supervisor: Mihaylova, Lyudmila; Coca, Daniel
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:  DOI: Not available