Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.686304
Title: Regression-based estimation of pain and facial expression intensity
Author: Kaltwang, Sebastian
ISNI: 0000 0004 5918 4967
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2015
Availability of Full Text:
Access through EThOS: Full text unavailable from EThOS. Please try the institution link below.
Access through Institution:
Abstract:
Human inner feelings and psychological states like pain are subjective states that cannot be measured directly, but they can be estimated from non-verbal behaviour such as spontaneous facial expressions. Since these expressions are typically characterized by subtle movements of facial parts, analysis of facial details is required. The contribution of this thesis is two-fold. First, we propose a novel set of Bayesian regression-based learning methods for intensity estimation of facial expressions. Second, we create and publicly release the first multi-modal database of patients experiencing chronic pain, in order to facilitate further research into machine learning for automated pain analysis.

We formulate three novel regression methods for continuous estimation of the intensity of facial expressions of pain and of facial action units (AUs), i.e., activations of facial muscle groups. The first regression model treats the observed face holistically and estimates the intensity of target expressions within the Relevance Vector Machine (RVM) framework, using a newly proposed fusion of shape and appearance features. This is the first method in the field to address automated continuous intensity estimation of facial expressions of pain. We then extend this approach to the Doubly Sparse RVM (DSRVM), which automatically learns the importance of various facial parts for the target task. DSRVM enforces double sparsity by jointly selecting the most relevant training examples (a.k.a. relevance vectors) and the most important kernels, i.e., those associated with the facial parts most informative for the estimation of facial expression intensity. This advances prior work on multiple-kernel learning (MKL), where kernel sparsity is typically ignored, and leads to improved intensity estimation performance over existing MKL methods and over the state-of-the-art methods for intensity estimation of pain and AUs. Lastly, we introduce a regression-based approach that jointly learns the inter-dependence of facial parts and of multiple AU or pain targets. This is accomplished by a newly formulated latent tree (LT) model that efficiently learns a hidden inference structure between features and targets. The proposed approach is the first to address joint estimation of the continuous intensity of multiple AU outputs in a principled manner. We show that this joint approach achieves better AU intensity estimation than existing methods, especially in the presence of noisy inputs.

The proposed regression methods are evaluated on two established datasets of naturalistic facial expressions, DISFA and ShoulderPain, and on our newly created dataset, named EmoPain. The new database consists of spontaneously displayed pain-related facial expressions and body movements, recorded with multiple modalities while patients with chronic back pain performed instructed physical exercises. The facial expression videos have been annotated frame-wise with continuous pain intensity. On all three datasets, we empirically demonstrate that the proposed methods, which model the face explicitly as the sum of its parts, outperform the existing state-of-the-art methods for the target tasks. This supports findings in psychology research suggesting that components of expressions, rather than the holistic face, play the key role in the interpretation of human facial expressions and, in particular, in the estimation of their intensity.
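For a concrete picture of the first method described above, the following is a minimal sketch of RVM-style intensity regression. It is an illustration of the general technique, not the thesis's actual implementation: the features, dimensions, and kernel parameter are synthetic stand-ins, and since scikit-learn ships no RVM, it emulates one by applying ARD-prior Bayesian regression (sklearn's ARDRegression) to an RBF kernel basis expansion, which is the standard construction underlying the RVM.

```python
# A minimal sketch of RVM-style facial-expression intensity regression.
# All features below are synthetic stand-ins for the shape and appearance
# descriptors mentioned in the abstract; ARDRegression over a kernel basis
# is used as a stand-in for a true RVM.
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Hypothetical per-frame features: landmark coordinates (shape) and local
# texture descriptors (appearance), fused here by simple concatenation.
n_train, n_test = 200, 50
shape = rng.normal(size=(n_train + n_test, 20))
appearance = rng.normal(size=(n_train + n_test, 40))
X = np.hstack([shape, appearance])
# Toy continuous intensity target (stand-in for frame-wise pain/AU labels).
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n_train + n_test)

X_tr, X_te = X[:n_train], X[n_train:]
y_tr, y_te = y[:n_train], y[n_train:]

# Kernel basis expansion: each training frame contributes one RBF basis
# function (gamma is an arbitrary choice for this synthetic data).
K_tr = rbf_kernel(X_tr, X_tr, gamma=0.05)
K_te = rbf_kernel(X_te, X_tr, gamma=0.05)

# The ARD prior prunes most basis weights to (near) zero; the surviving
# training frames play the role of RVM "relevance vectors".
model = ARDRegression()
model.fit(K_tr, y_tr)
pred = model.predict(K_te)

n_relevant = int(np.sum(np.abs(model.coef_) > 1e-3))
print(f"relevance vectors kept: {n_relevant}/{n_train}")
print(f"test MSE: {np.mean((pred - y_te) ** 2):.4f}")
```

The ARD prior drives most basis-function weights to zero, so only a small subset of training frames contributes to the prediction; this sparsity is what the DSRVM extension then enforces jointly over training examples and per-facial-part kernels.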
Supervisor: Pantic, Maja
Sponsor: European Commission
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.686304
DOI: Not available