Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531397
Title: Non-parametric and wide-baseline natural image and video matting
Author: Sarim, Muhammad
ISNI:       0000 0004 2697 0590
Awarding Body: University of Surrey
Current Institution: University of Surrey
Date of Award: 2010
Abstract:
In this thesis we address natural video matting, a classic problem in video processing. Natural digital matting refers to extracting the foreground, together with its opacity, from an image or video with a natural, uncontrolled background. This allows foreground objects to be composited into a different background to obtain a novel scene, one of the most common and important visual effects used in commercial media production. In this research we investigate the natural image and video matting problem in single and multiple views. We first investigate the use of local non-parametric image statistics for natural image matting, using image patches to represent the local colour appearance and structure of a given image. We then extend the proposed non-parametric image matting algorithm to videos. The proposed approaches are compared with state-of-the-art techniques on a variety of synthetic and natural images and videos. The results are comparable to state-of-the-art approaches for images with locally smooth regions, while improved performance is observed for textured images.

Our second line of research focuses on image and video matting of scenes captured by multiple wide-baseline cameras. To the best of our knowledge, the matting problem in wide-baseline images and videos has not previously been investigated. We present a novel framework based on inferential statistics to spatially propagate user-provided constraints across wide-baseline views, limiting user interaction to a single view. In a second stage we extend the proposed wide-baseline image matting to wide-baseline videos; the framework reliably propagates the user-provided constraints spatio-temporally, across views and over time.
To minimise user interaction, we dynamically update the information available from all views to incorporate temporal variations in the scene caused by intensity changes, shadows, motion blur, occlusion and shading over time. Results are presented on different indoor and outdoor scenes containing non-rigid, fast foreground motion, which poses a challenging matting problem for existing single-view video matting techniques. The results are comparable to those of state-of-the-art single-view video matting algorithms applied independently to each view, while the proposed approach significantly reduces the manual interaction required to matte multi-view wide-baseline videos.
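The compositing operation the abstract describes — placing an extracted foreground over a new background using the recovered opacity — follows the standard matting equation C = αF + (1 − α)B. A minimal NumPy sketch (the arrays below are hypothetical toy values, not data from the thesis; real mattes operate per pixel on full RGB frames):

```python
import numpy as np

# Toy 2x2 single-channel example (illustrative values only).
F = np.array([[1.0, 0.8],
              [0.6, 0.4]])        # extracted foreground colours
B_new = np.array([[0.0, 0.1],
                  [0.2, 0.3]])    # novel background to composite into
alpha = np.array([[1.0, 0.5],
                  [0.5, 0.0]])    # matte (per-pixel opacity) in [0, 1]

# Standard compositing equation: C = alpha * F + (1 - alpha) * B
composite = alpha * F + (1.0 - alpha) * B_new
print(composite)  # fully opaque pixels keep F; fully transparent ones keep B_new
```

Matting is the inverse problem: given only the composite C (and sparse user constraints), estimate α, F and B — which is under-constrained and is why statistical priors such as the non-parametric patch statistics above are needed.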
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.531397
DOI: Not available