Title: Facial analytics for emotional state recognition
Author: Papazachariou, Konstantinos
ISNI:       0000 0004 6422 5989
Awarding Body: University of Strathclyde
Current Institution: University of Strathclyde
Date of Award: 2017
For more than 75 years, social scientists have studied human emotions. Although numerous theories have been developed about the provenance and number of basic emotions, most agree that emotions can be grouped into six categories: anger, disgust, fear, joy, sadness and surprise. To evaluate emotions, psychologists have focused their research on facial expression analysis. In recent years, progress in digital technologies has steered researchers in psychology, computer science, linguistics, neuroscience, and related disciplines towards computer systems that detect and analyze human emotions. In the literature, these algorithms are usually referred to as facial emotion recognition (FER) systems. In this thesis, two different approaches are described and evaluated for recognizing the six basic emotions automatically from still images. An effective face detection scheme, based on color techniques and the well-known Viola and Jones (VJ) algorithm, is proposed for localizing the face and facial characteristics within an image. A novel algorithm, which exploits the coordinates of the eye centers, is applied to align the detected face. To reduce the effects of illumination, homomorphic filtering is applied to the face area. Three regions (mouth, eyes and glabella) are localized and further processed for texture analysis. Although many methods have been proposed in the literature to recognize emotion from the human face, they are not designed to handle partial occlusions and multiple faces. Therefore, a novel algorithm that extracts information through texture analysis from each region of interest is evaluated. Two popular techniques (histograms of oriented gradients and local binary patterns) are used to perform texture analysis on the aforementioned facial patches.
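To illustrate one of the two texture descriptors mentioned above, the basic local binary pattern operator can be sketched in a few lines of NumPy. This is only a minimal illustration of the standard 3x3 LBP code and its histogram; the patch sizes, LBP variants and parameter combinations evaluated in the thesis are not reproduced here.

```python
import numpy as np

def lbp_3x3(patch):
    """Basic 8-neighbor LBP code for the center pixel of a 3x3 patch.

    Each neighbor contributes a bit: 1 if it is >= the center, 0 otherwise.
    """
    center = patch[1, 1]
    # Neighbors in clockwise order starting at the top-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= center else 0 for r, c in order]
    return sum(b << i for i, b in enumerate(bits))

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior of a grayscale image."""
    h, w = img.shape
    codes = [lbp_3x3(img[r - 1:r + 2, c - 1:c + 2])
             for r in range(1, h - 1) for c in range(1, w - 1)]
    return np.bincount(codes, minlength=256)
```

In an FER pipeline of this kind, one such histogram would typically be computed per facial patch (mouth, eyes, glabella) and the histograms concatenated into a feature vector for the classifier.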
By evaluating several combinations of their principal parameters and two classification techniques (support vector machine and linear discriminant analysis), three classifiers are proposed. These three models are enabled depending on the availability of the regions. Although both classification approaches showed impressive results, LDA proved to be slightly better, especially in terms of the amount of data it must manage. Therefore, the final models, which were used for comparison purposes, were trained using LDA classification. Experiments using the Cohn-Kanade plus (CK+) and Amsterdam Dynamic Facial Expression Set (ADFES) datasets demonstrate that the presented FER algorithm surpasses other significant FER systems in terms of processing time and accuracy. The evaluation of the system involved three experiments: an intra-testing experiment (training and testing on the same dataset), a cross-dataset train/test process between CK+ and ADFES, and finally the development of a new database based on selfie photos, which was tested on the pre-trained models. The last two experiments provide strong evidence that the Emotion Recognition System (ERS) can operate under various pose and lighting conditions.
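For readers unfamiliar with the LDA classification technique named above, a two-class Fisher discriminant can be written directly in NumPy. This is only a minimal sketch under simplified assumptions (two classes, synthetic features, a small ridge term for numerical stability); the thesis's six-class models, features and training procedure are not reproduced here.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher LDA: projection direction w and decision threshold.

    X0, X1: (n_samples, n_features) arrays for class 0 and class 1.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    # Small ridge term keeps the solve stable if Sw is near-singular.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    # Decision boundary at the midpoint between the projected class means.
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def lda_predict(X, w, threshold):
    """Assign class 1 to samples whose projection exceeds the threshold."""
    return (X @ w > threshold).astype(int)
```

A multi-class system such as the one described here would extend this idea, e.g. via pairwise discriminants or a shared discriminant subspace, and train one model per available region combination.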
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral