Title: Computational models of socially interactive animation
Author: Okwechime, Dumebi
ISNI: 0000 0004 2711 5422
Awarding Body: University of Surrey
Current Institution: University of Surrey
Date of Award: 2011
The aim of this thesis is to investigate computational models of non-verbal social interaction for the purpose of generating synthetic social behaviour in animations. To this end, several contributions are made: a dynamic model providing multimodal control of animation is developed and demonstrated on various data formats, including motion capture and video; a social interaction model is developed, capable of predicting social context and intent, such as the level of interest in a conversation; and finally, the social model is used to drive the dynamic model, animating the appropriate social behaviour of a listener in response to a speaker.

A method of reusing motion-captured data by learning a generative model of motion is presented. The model allows real-time synthesis and blending of motion while preserving the style and realism of the original data set. This is achieved by projecting the data into a lower-dimensional space and learning a multivariate probability distribution over the motion sequences. Functioning as a generative model, the probability density estimate is used to produce novel poses, and pre-computed motion derivatives combined with gradient-based optimisation generate the animation.

A new algorithm for real-time interactive motion control is introduced and demonstrated on motion-captured data, pre-recorded videos and human-computer interaction (HCI). This example-based method synthesises motion from the original data by seamlessly combining subsequences. A novel approach to determining transition points is presented based on k-medoids, whereby appropriate points of intersection in the motion trajectory are derived as cluster centres. These points are used to segment the data into smaller subsequences, and a transition matrix combined with kernel density estimation determines suitable transitions between the subsequences, producing novel motion.
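The thesis does not reproduce its implementation here, but the idea of deriving transition points as k-medoid cluster centres of a pose trajectory can be sketched as follows. This is a minimal illustration, not the author's code: the toy 2D trajectory, the alternating k-medoids routine and all names are assumptions made for the example.

```python
import numpy as np

def k_medoids(points, k, iters=50, seed=0):
    """Simple alternating k-medoids: assign each pose to its nearest
    medoid, then re-pick each cluster's most central member as medoid."""
    rng = np.random.default_rng(seed)
    n = len(points)
    # pairwise Euclidean distances between all poses in the trajectory
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(d[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                # member minimising total distance to the rest of its cluster
                sub = d[np.ix_(members, members)]
                new_medoids[c] = members[np.argmin(sub.sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(d[:, medoids], axis=1)
    return medoids, labels

# Toy "motion trajectory": a 2D curve that revisits regions of pose space.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
traj = np.stack([np.cos(t), np.sin(2.0 * t)], axis=1)

medoids, labels = k_medoids(traj, k=4)
# The medoid frames act as candidate transition points: cutting the
# trajectory there yields subsequences that begin and end near shared poses.
transition_frames = np.sort(medoids)
```

In the thesis the "points" would be (dimensionally reduced) motion-capture poses rather than a toy curve, and the resulting subsequences feed the transition matrix and kernel density estimate described above.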
To facilitate real-time interactive control, conditional probabilities are used to derive motion given user commands. The user control can come from any modality, including audio, touch and gesture. The system is also extended to HCI, using audio signals from speech in a conversation to trigger non-verbal responses from a synthetic listener in real time. The flexibility of the method is demonstrated on data sets ranging from vectorised images to 2D and 3D point representations.

To learn the dynamics of social interaction, experiments are conducted to elicit the natural social dynamics of people in conversation. Semi-supervised computer vision techniques are then employed to extract social signals such as laughing and nodding. Learning is performed using association rule data mining to deduce frequently occurring patterns of social behaviour between a speaker and listener in both interested and not-interested scenarios. The confidence values of the mined rules are used to build a Social Dynamics Model (SDM), which can then be used for both classification and visualisation. By visualising the rules generated in the SDM, it is possible to analyse distinct social trends between an interested and a not-interested listener in a conversation. The confidence values extracted from the mining can also be used as conditional probabilities to animate socially responsive avatars. A texture motion graph is combined with the example-based animation system developed earlier in the thesis. Using the mined rules of social interaction, social signals are synthesised within the animation, giving the user control over who speaks and the interest level of the participants.
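The step from mined rules to animation can be made concrete with a small sketch: rule confidence, count(speaker signal and listener signal) / count(speaker signal), estimates the conditional probability that a listener signal accompanies a speaker signal, and sampling against those confidences triggers responses. The frame annotations and signal names below are illustrative assumptions, not data from the thesis.

```python
import random
from collections import Counter

# Hypothetical annotated frames: (speaker signals, listener signals).
frames = [
    ({"speaking"},            {"nodding"}),
    ({"speaking"},            {"nodding"}),
    ({"speaking", "smiling"}, {"smiling"}),
    ({"speaking"},            set()),
    ({"smiling"},             {"smiling"}),
    ({"speaking", "smiling"}, {"nodding", "smiling"}),
]

def rule_confidence(frames):
    """confidence(s -> l) = count(s and l together) / count(s),
    read as an estimate of P(listener shows l | speaker shows s)."""
    antecedent = Counter()
    joint = Counter()
    for speaker, listener in frames:
        for s in speaker:
            antecedent[s] += 1
            for l in listener:
                joint[(s, l)] += 1
    return {(s, l): c / antecedent[s] for (s, l), c in joint.items()}

conf = rule_confidence(frames)

def sample_response(speaker_signal, conf, rng=random.Random(0)):
    """Treat each rule's confidence as a conditional probability: every
    candidate listener signal fires independently with that probability."""
    return {l for (s, l), p in conf.items()
            if s == speaker_signal and rng.random() < p}
```

Here `conf[("speaking", "nodding")]` is 3/5 = 0.6, so a synthetic listener would nod in roughly three of every five frames in which the speaker is speaking; in the thesis such sampled signals drive the texture motion graph and example-based animation system.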
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available