Animatronics: the development of a facial action sensing system to enhance performance control
This thesis presents initial exploratory research into an original technique to enhance performance control in animatronics. An animatronic system is defined here as a 3-D electro-mechanically driven facial model that, under the control of a human performer, moves to create the "illusion of life" for a viewer. The vital elements of this form of performance are the synchronisation of lip movements to an acoustic speech signal and the animation of emotive expressions. A novel optical sensing technique is proposed, based on the hypothesis that capturing distinctive articulatory or emotive movements directly from the performer's face provides a more 'natural' form of control. Underpinning this hypothesis is the principle that the movement of a minimal set of points at key positions on the face yields sufficient control information to describe the overall facial action. A comprehensive investigation into human communication, including visual speech perception and non-verbal facial expression, is described in order to define the optimum set of key points. Conclusions are also drawn on the primary facial actions required for successful lip synchronisation. Both the theoretical and practical aspects of realising a prototype system are described. A methodology is presented for assessing the sensing system and the overall objectives, based on the design and construction of an animatronic face of the same dimensions as the researcher's, capable of animating the desired actions with similar displacements. Objective analysis is achieved by comparing the sensor system's measurements of the performer's key-point movements with those of the animatronic model. Perceptual data is generated through visual analysis of the animated facial movement. The results and analysis of these investigations are presented in the thesis.
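The objective comparison of performer and model key-point movements can be sketched in code. This is purely an illustration, not the thesis's actual analysis procedure; the key-point names, displacement values, and the choice of a root-mean-square error metric are all assumptions introduced here:

```python
# Illustrative sketch (hypothetical data, not from the thesis):
# comparing a performer's key-point displacement traces, as measured
# by the sensing system, with those reproduced by the animatronic model.
import math

def rms_displacement_error(performer, model):
    """Root-mean-square error (mm) between matched displacement traces,
    computed per key point over a sequence of frames."""
    errors = {}
    for name, p_trace in performer.items():
        m_trace = model[name]
        squared = [(p - m) ** 2 for p, m in zip(p_trace, m_trace)]
        errors[name] = math.sqrt(sum(squared) / len(squared))
    return errors

# Hypothetical vertical-displacement traces (mm) over five frames.
performer = {"lower_lip": [0.0, 2.0, 4.0, 2.0, 0.0],
             "brow":      [0.0, 1.0, 2.0, 1.0, 0.0]}
model     = {"lower_lip": [0.0, 1.8, 3.9, 2.2, 0.1],
             "brow":      [0.0, 1.1, 1.8, 0.9, 0.0]}

errors = rms_displacement_error(performer, model)
```

A per-key-point error of this kind gives one simple objective measure of how closely the model's animated displacements track the performer's, complementing the perceptual (visual) assessment.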
The thesis discusses results which indicate that, under certain stated assumptions, the sensor system is capable of consistent facial motion detection and can provide sufficient control for the animatronic model to produce a limited set of facial actions in a realistic manner. The results also indicate the potential for improved lip synchronisation and, hence, improved overall character performance.