Title: Intrinsic reflectance estimation from video and shape for natural dynamic scenes
Author: Imber, James
ISNI: 0000 0004 5918 625X
Awarding Body: University of Surrey
Current Institution: University of Surrey
Date of Award: 2016
Shape information has been recognised as playing a role in intrinsic image estimation since its inception. However, it is only in recent years that hints of the importance of geometry have been found in decomposing surface appearance into albedo and shading estimates. This thesis establishes the central importance of shape in intrinsic surface property estimation for static and dynamic scenes, and introduces methods for the use of approximate shape in a wide range of related problems to provide high-level constraints on shading.

A key contribution is intrinsic texture estimation. This is a generalisation of intrinsic image estimation, in which appearance is processed as a function of surface position rather than pixel position. This approach has numerous advantages, in that the shape can be used to resolve occlusion, inter-reflection and attached shading as a natural part of the method. Unlike previous bidirectional texture function estimation approaches, high-quality albedo and shading textures are produced without prior knowledge of materials or lighting.

Many of the concepts in intrinsic texture estimation can be extended to single-viewpoint capture for which depth information is available. Depth information greatly reduces the ambiguity of the shading estimation problem, allowing online intrinsic video to be developed for the first time. The availability of a lighting function also allows high-level temporal constraints on shading to be applied over video sequences, which previously required per-pixel correspondence between frames to be established. A number of applications of intrinsic video are investigated, including augmented reality, video stylisation and relighting, all of which run at interactive framerates. The albedo distribution of the input video is preserved, even in the case of natural scenes with complex appearance, and a globally-consistent shading estimate is obtained which remains robust over dynamic sequences.
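The decomposition described above follows the standard intrinsic image model, in which observed appearance is the per-pixel product of albedo (reflectance) and shading. A minimal sketch of that model, with illustrative synthetic data rather than anything drawn from the thesis itself:

```python
import numpy as np

# Standard intrinsic image model: image = albedo * shading (per pixel).
# Given an image and a shading estimate (e.g. derived from approximate
# shape and a reconstructed lighting function), albedo follows by division.
rng = np.random.default_rng(0)

albedo_true = rng.uniform(0.2, 1.0, size=(4, 4))   # surface reflectance
shading_true = rng.uniform(0.1, 1.0, size=(4, 4))  # illumination term
image = albedo_true * shading_true                 # observed appearance

# Recover albedo from the image and the shading estimate, guarding
# against division by near-zero shading in dark regions.
albedo_est = image / np.maximum(shading_true, 1e-6)

print(np.allclose(albedo_est, albedo_true))  # True
```

The difficulty the thesis addresses is, of course, that shading is not given and must itself be estimated; the division step here simply makes the multiplicative ambiguity between albedo and shading explicit.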
Finally, an integrated framework bridging the gaps between intrinsic image, video and texture estimation is presented for the first time. Approximate scene geometry provides a convenient means of achieving this, and is used in establishing pixel constraints between adjacent cameras, reconstructing scene lighting, and removing cast shadows and inter-reflections. This introduces a unified geometry-based approach to intrinsic image estimation and related fields, which achieves high-quality results for complex natural scenes for a wide range of capture modalities.
Supervisor: Hilton, Adrian ; Guillemaut, Jean-Yves
Sponsor: Imagination Technologies Limited
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available