Title: Non-rigid alignment for temporally consistent 3D video
Author: Budd, C. W.
Awarding Body: University of Surrey
Current Institution: University of Surrey
Date of Award: 2011
This thesis presents methods for the temporal alignment of 3D performance capture data. Sequential tracking is discussed, and a novel approach for non-rigid free-form surfaces is introduced which improves on previous approaches by combining geometric and photometric features. This reduces drift and increases the reliability of tracking for complex 3D video sequences of people.

Non-sequential tracking is then achieved through the introduction of a novel shape similarity tree representation. Combined with sequential tracking, this approach enables alignment of an entire database of multiple 3D video sequences into a consistent mesh structure, and is the first approach to enable alignment across multiple sequences. Non-sequential alignment is also shown to reduce drift and improve the reliability of surface alignment, overcoming situations where sequential tracking fails.

Representing 3D video sequences in this way provides a number of advantages over previous work in the area. Firstly, with a hierarchical tree representation of a sequence, tracking can recover from even severe errors. Secondly, the tree divides the sequence into multiple tracking paths, represented by its branches; this allows long sequences to be tracked with relatively few sequential alignment steps, since the number of steps grows with the depth of the tree rather than linearly with the length of the sequence. Thirdly, automatic shape matching allows global alignment of multiple sequences of the same character, making it possible to produce databases of temporally consistent 3D video.

Temporally consistent 3D performance capture is an essential step towards an animation framework based on reconstructed video data. Current reconstruction techniques primarily produce a topologically different representation of each frame.
Consequently, editing a sequence requires altering every frame within it. A temporally consistent representation would allow edits to be propagated automatically with techniques such as space-time editing. With a fully consistent database of motions for a character, it becomes possible to parametrise and blend between motions using techniques such as motion graphs, allowing re-animation of the captured character.
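The claim that alignment effort grows with tree depth rather than sequence length can be illustrated with a minimal sketch. This is not the thesis's actual method: the shape similarity measure, the frame distances, and the tree construction (here a minimum spanning tree built with Prim's algorithm over a toy dissimilarity matrix) are all illustrative assumptions. It only shows the structural idea: each frame is aligned to its parent in the tree, so the number of pairwise alignment steps for a frame equals its depth.

```python
# Hypothetical sketch of a shape similarity tree for non-sequential tracking.
# dist[i][j] is an assumed pairwise shape dissimilarity between frames i and j;
# the real similarity measure from the thesis is not reproduced here.
import heapq

def similarity_tree(dist, root=0):
    """Build a minimum spanning tree over pairwise dissimilarities
    (Prim's algorithm). Returns parent[i] for each frame i, with
    parent[root] == root."""
    n = len(dist)
    parent = [None] * n
    in_tree = [False] * n
    heap = [(0.0, root, root)]  # (edge cost, frame, tentative parent)
    while heap:
        d, u, p = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        parent[u] = p
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist[u][v], v, u))
    return parent

def depth(parent, i):
    """Number of pairwise alignment steps needed to track frame i,
    i.e. its depth in the similarity tree."""
    steps = 0
    while parent[i] != i:
        i = parent[i]
        steps += 1
    return steps
```

For a toy six-frame sequence in which frame 5 closely resembles frame 0 (say, a repeated pose), sequential tracking must take five alignment steps to reach frame 5, whereas the tree can align it to frame 0 directly, in a single step.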
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available