Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.745238
Title: Machine learning for high-level social behaviour
Author: Bilakhia, Sanjay
ISNI: 0000 0004 7232 6245
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2018
Abstract:
The ability to recognize and interpret the complex displays of nonverbal behavioural cues that arise in social interaction comes naturally to humans. Indeed, the survival and flourishing of early groups of Homo sapiens may have depended on this ability to share implicit social information. The process is so innate that complex social behaviours can occur without conscious awareness, even in young babies. Though we would benefit from artificial devices that can understand these nonverbal cues, building them has proven an elusive goal. In this thesis we are primarily motivated by the problem of recognizing and exploiting displays of high-level social behaviour, focusing on behavioural mimicry: the tendency of individuals to adopt the postures, gestures and expressions of their social interaction partners.

We first provide a background to the phenomenon of behavioural mimicry, distinguish it from related phenomena in social interaction, and survey its surprisingly complex dependencies on the broader social context. We then discuss a number of methods that could be used to recognize mimicry in naturalistic interaction, list publicly available databases on which these tools could be trained for the analysis of spontaneous mimicry, examine the scarce prior work on recognition of naturalistic mimicry behaviour, and discuss the challenges of automatically recognizing mimicry in spontaneous data.

Subsequently we present a database of naturalistic social interactions, designed for the analysis of spontaneous mimicry behaviour and annotated for mimicry episodes, low-level nonverbal behavioural cues, and continuous affect. We also present a new software package for web-based annotation, AstAn, which has been extensively deployed for temporal event segmentation and continuous annotation. Collecting annotation data for high-level social affect is a difficult problem because of inter-annotator variance, which depends on a variety of factors including i) the content of the data to annotate, ii) the complexity of the variables to annotate, and iii) the annotators' cultures and personality traits. AstAn is the first software package to enable large-scale collection of annotations relevant to affective computing without the costly manual distribution and management of (perhaps sensitive) data, and such large-scale, cost-effective data collection can significantly help to overcome these difficulties.

We present experiments showing that the prevailing methods for mimicry recognition on posed data generalize suboptimally to spontaneous data. These include methods based on cross-correlation and dynamic time warping, which dominate current work on recognition of interpersonal co-ordination, including mimicry and synchrony. We further show that popular temporal models such as recurrent neural networks, when applied in a straightforward classification approach, also struggle to discriminate between mimicry and non-mimicry. We expand upon these baseline results using methods adapted from work on multimodal classification, employing nonlinear regression models to learn the relationships between the nonverbal cues of each subject. Specifically, for the mimicry and non-mimicry classes we learn a set of neural networks that forecast the behaviour of each subject given the behaviour of their counterpart; the set of networks that produces the best behavioural forecast determines the predicted class.
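As an illustration of this forecast-based classification scheme, the minimal Python sketch below fits one pair of one-step-ahead forecasters per class and assigns an unseen episode to the class whose forecasters predict it best. This is a hypothetical reconstruction, not the thesis implementation: the choice of scikit-learn's MLPRegressor, the hidden-layer size, and the use of mean-squared forecast error are all assumptions.

    # Hedged sketch of class-conditional behavioural forecasting (not the thesis code).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def fit_forecasters(episodes):
        """episodes: list of (cues_a, cues_b) pairs, each an array of shape (T, D).
        Fits one regressor per direction: B(t) -> A(t+1) and A(t) -> B(t+1)."""
        Xa, Ya, Xb, Yb = [], [], [], []
        for a, b in episodes:
            Xa.append(b[:-1]); Ya.append(a[1:])
            Xb.append(a[:-1]); Yb.append(b[1:])
        f_ab = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
        f_ba = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
        return (f_ab.fit(np.vstack(Xa), np.vstack(Ya)),
                f_ba.fit(np.vstack(Xb), np.vstack(Yb)))

    def forecast_error(forecasters, episode):
        """One-step-ahead mean-squared forecast error for a single episode."""
        f_ab, f_ba = forecasters
        a, b = episode
        err_a = np.mean((f_ab.predict(b[:-1]) - a[1:]) ** 2)
        err_b = np.mean((f_ba.predict(a[:-1]) - b[1:]) ** 2)
        return err_a + err_b

    def classify(episode, class_models):
        """class_models: dict mapping a class label to its fitted forecaster pair.
        The class whose forecasters best predict the episode wins."""
        return min(class_models, key=lambda c: forecast_error(class_models[c], episode))

Given labelled training data, one would build class_models = {"mimicry": fit_forecasters(mimicry_episodes), "non-mimicry": fit_forecasters(other_episodes)} and classify unseen episodes by minimum forecast error.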
Subsequently, we investigate whether displays of high-level social affect such as mimicry, conflict, valence and arousal are unique to each individual. Specifically, we show that for episodes of a given behavioural display, such as mimicry or high conflict, the spatiotemporal movement characteristics are distinctive enough to construct a "kinematic template" for that behaviour. Given an unseen episode of the same behavioural display, we can compare it against the template in order to verify identity. This is useful in verification contexts where facial appearance and geometry can change due to lighting, facial hair, facial decoration, or weight loss. We present a new method, Multi-Sequence Robust Canonical Time Warping (M-RCTW), to construct this subject- and behaviour-specific template. Unlike prior methods, M-RCTW can warp together multiple multivariate sequences in the presence of large non-Gaussian errors, such as the tracking artefacts that occlusions produce in naturalistic behaviour. We show on two databases of natural interaction that identity verification is possible from a number of high- and low-level behaviours, and that M-RCTW outperforms existing methods for multiple-sequence warping on the task of subject verification.
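M-RCTW itself is not reproduced here, but the template-comparison step can be illustrated with plain dynamic time warping (DTW), the classical two-sequence alignment method that this line of work builds on. The sketch below is a hedged illustration under assumed names and a Euclidean local cost; unlike M-RCTW, it aligns only two sequences and has no robustness to large non-Gaussian errors.

    # Hedged illustration: plain DTW as a stand-in for comparing an unseen
    # episode against a subject- and behaviour-specific kinematic template.
    import numpy as np

    def dtw_distance(x, y):
        """x: array (Tx, D); y: array (Ty, D). Returns the DTW alignment cost."""
        tx, ty = len(x), len(y)
        cost = np.full((tx + 1, ty + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, tx + 1):
            for j in range(1, ty + 1):
                d = np.linalg.norm(x[i - 1] - y[j - 1])  # Euclidean frame distance
                cost[i, j] = d + min(cost[i - 1, j],     # insertion
                                     cost[i, j - 1],     # deletion
                                     cost[i - 1, j - 1]) # match
        return cost[tx, ty]

    def verify(template, episode, threshold):
        """Accept the claimed identity if the unseen episode lies close
        enough to that subject's kinematic template for the behaviour."""
        return dtw_distance(template, episode) <= threshold

In practice the acceptance threshold would be calibrated on held-out episodes of the claimed subject.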
Supervisor: Pantic, Maja
Sponsor: Engineering and Physical Sciences Research Council
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.745238