Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.782149
Title: Performance driven facial animation with blendshapes
Author: Ravikumar, Shridhar
ISNI:       0000 0004 7967 7558
Awarding Body: University of Bath
Current Institution: University of Bath
Date of Award: 2018
Abstract:
In this thesis, we address some of the open challenges in the area of Performance Driven Facial Capture and Animation, specifically with the goal of improving the fidelity of the capture results and making both the Modeling and Capture stages of the animation pipeline robust, inexpensive, automated and consumer-friendly. We present an overview of the process of facial animation and specifically Performance Driven Facial Animation, including the Modeling, Capture and Retargeting stages. We then discuss the existing literature in the area in detail and weigh the pros and cons of the various approaches that have been presented over the last few decades, along with the differences between them. We then present, in detail, our contributions to the Modeling stage of the pipeline in the form of automating the generation of actor-specific Blendshape Models from a single scan of the actor's face, or alternatively from a few images of the actor's face, resulting in a pipeline that is automated and inexpensive while remaining inclusive of actor-specific nuances. We then present our contributions in the form of our marker-based Capture pipeline, which improves upon traditional marker-based systems by incorporating additional features in the form of makeup patterns that are used to train a FACS classifier, integrated with our Blendshape weight optimization in a hybrid fashion. We show that this leads to improved results, especially in areas that are otherwise challenging to capture with markers alone. We then discuss our contributions to the markerless Capture pipeline and present our approach to track an actor's face with just a monocular RGB camera. We show that our method is able to achieve realistic results in spite of the missing information inherent in the monocular input by making use of static and dynamic prior information gleaned from existing animations produced by accurate 3D systems.
We quantitatively evaluate our results, comparing them with those of an approach using a monocular input without our spatial constraints, and show that our results are closer to the ground-truth geometry. Finally, we present our results and conclusions and discuss future directions of research.
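The Modeling and Capture stages described above both rest on the linear blendshape formulation, in which a face mesh is a neutral shape plus a weighted combination of per-expression offsets, and capture reduces to solving for the weights that best explain an observed face. A minimal sketch of that formulation follows; all names, dimensions and data are hypothetical, and the plain least-squares solve merely stands in for the thesis's full hybrid optimization:

```python
import numpy as np

# Hypothetical blendshape rig: a neutral mesh plus per-expression offsets.
rng = np.random.default_rng(0)
n_vertices, n_shapes = 100, 5
neutral = rng.normal(size=(n_vertices, 3))            # neutral face geometry
offsets = rng.normal(size=(n_shapes, n_vertices, 3))  # blendshape deltas

def synthesize(weights):
    """Return the mesh produced by a weight vector (one weight per blendshape)."""
    return neutral + np.tensordot(weights, offsets, axes=1)

# Weight recovery sketch: given an observed mesh, solve for the weights
# by linear least squares over the flattened offset matrix.
true_w = np.array([0.2, 0.0, 0.7, 0.1, 0.5])
observed = synthesize(true_w)

A = offsets.reshape(n_shapes, -1).T   # shape (3 * n_vertices, n_shapes)
b = (observed - neutral).ravel()
est_w, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In a real capture system the observed geometry is noisy and incomplete (e.g. sparse markers or monocular input), so the solve is typically regularized and constrained rather than an exact least-squares fit as shown here.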
Supervisor: Cosker, Darren
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.782149
DOI: Not available
Keywords: Performance Driven Facial Animation ; Blendshapes ; Facial Animation ; Face ; Animation