Title: Investigation of gait representations and partial body gait recognition
Author: Wattanapanich, Chirawat
ISNI: 0000 0004 7966 8088
Awarding Body: University of Reading
Current Institution: University of Reading
Date of Award: 2019
Recognising an individual by the way they walk has been one of the most popular research subjects within the field of soft biometrics over the last few decades. Advances in technology and equipment such as Closed-Circuit Television (CCTV), wireless internet and wearable sensors make it easier than ever to obtain gait data. The gait biometric can be applied widely in areas such as biomedicine, forensics and surveillance. However, gait recognition still faces many challenges and fundamental issues, which motivate researchers to explore various gait topics in order to overcome them and improve the field. Gait recognition currently performs well only under very specific conditions, such as normal walking, the absence of occluding clothing and fixed camera view angles; when these conditions change, the classification rate drops dramatically. This study aims to address the problems of clothing, carried objects and camera view angle in an indoor environment with video-based data collection. Two gait databases are used for testing: CASIA dataset B and the OU-ISIR Large Population dataset with Bag (OU-LP-Bag). Three main tasks are tested on CASIA dataset B, while only gait recognition is tested on OU-LP-Bag. A gait recognition framework is developed to solve the three main tasks: gait recognition from an identical view, view classification and cross-view recognition. This framework takes a gait image sequence as input and generates a gait compact image. Gait features are then extracted with an optimal feature map obtained by Principal Component Analysis (PCA), and a linear Support Vector Machine (SVM) is used as the one-against-all multiclass classifier.
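The front end of this framework can be sketched in a few lines. The following is a minimal illustration, not the thesis implementation: it averages a sequence of binary silhouettes into a Gait Energy Image (one of the gait compact images introduced below) and projects the flattened result onto a PCA feature map computed via SVD. The array sizes and the random silhouettes are illustrative assumptions; the resulting feature vectors are what the linear SVM classifier would consume.

```python
# Hedged sketch of the pipeline: gait compact image (here a GEI) ->
# PCA feature map -> feature vectors for a one-against-all linear SVM.
import numpy as np

def gait_energy_image(silhouettes):
    """GEI(x, y): mean over the gait cycle of aligned binary silhouettes."""
    frames = np.asarray(silhouettes, dtype=float)  # (T, H, W), values 0/1
    return frames.mean(axis=0)                     # (H, W), values in [0, 1]

def pca_fit(X, n_components):
    """Fit PCA on row-vector samples X (n_samples, n_features) via SVD."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                 # principal axes as rows

def pca_project(X, mean, components):
    """Project samples onto the fitted feature map."""
    return (X - mean) @ components.T

# Toy data: 8 gait cycles of 8-frame, 32x22 binary silhouettes (assumed sizes).
rng = np.random.default_rng(0)
geis = np.stack([
    gait_energy_image(rng.integers(0, 2, size=(8, 32, 22)))
    for _ in range(8)
])
X = geis.reshape(len(geis), -1)        # flatten each GEI to a feature row
mean, comps = pca_fit(X, n_components=4)
features = pca_project(X, mean, comps)  # (8, 4) gait feature vectors
# `features` would feed the linear SVM described in the abstract.
```

A library SVM (for example a one-vs-rest linear classifier) would replace the final comment in a full implementation; it is omitted here to keep the sketch self-contained.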
Four gait compact images are used as basic gait representations: the Gait Energy Image (GEI), Gait Entropy Image (GEnI), Gait Gaussian Image (GGI) and a novel gait image called the Gait Gaussian Entropy Image (GGEnI). Three secondary gait representations are then generated from these basic representations: the Gradient Histogram Gait Image (GHGI) and two novel representations called the Convolutional Gait Image (CGI) and the Convolutional Gradient Histogram Gait Image (CGHGI). All representations are tested on the three main tasks. When people walk, each body part does not carry the same locomotion information; for example, there is much more motion in the legs than in the shoulders. Moreover, clothing and carried objects do not affect every part of the body equally; for example, a handbag does not generally affect leg motion. This study divides the human body into fourteen body parts based on height, and body parts and gait representations are combined to solve the three main tasks. Three combined-part techniques, each using two different parts, are created. The first is Part Score Fusion (PSF), which sums the scores of two models, one per part; the model with the highest summed score is chosen as the result. The second is Part Image Fusion (PIF), which concatenates two parts into a single image at a 1:1 ratio; the highest-scoring model generated from the fused image is selected as the result. The third is Multi Region Duplication (MRD), which follows the same idea as PIF but increases the second part's ratio to 1:2, 1:3 and 1:4. These techniques are tested on gait recognition from an identical view. In conclusion, the general framework is effective for the three main tasks. GHGI-GEI generated from the full silhouette is the most effective representation for gait recognition from an identical view and for cross-view recognition.
GHGI-GGI with the lower-knee region is the most effective representation for view angle classification. The GHGI-GEI PIF combination of the full-body and limb parts is the most effective combination on OU-LP-Bag. A more detailed description of each aspect is given in the following chapters.
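The two image-level fusion techniques described above reduce to simple array concatenation. The sketch below is a hedged illustration under assumed part sizes and an assumed vertical stacking direction; the part names (`head`, `legs`) are hypothetical placeholders for the height-based body regions.

```python
# Illustrative sketch of PIF and MRD on two body-part images.
# PIF concatenates two parts at a 1:1 ratio; MRD duplicates the
# second part to reach a 1:2, 1:3 or 1:4 ratio, per the abstract.
import numpy as np

def part_image_fusion(part_a, part_b):
    """PIF: stack two equally weighted body-part images (ratio 1:1)."""
    return np.vstack([part_a, part_b])

def multi_region_duplication(part_a, part_b, ratio=2):
    """MRD: like PIF, but the second part is repeated `ratio` times."""
    return np.vstack([part_a] + [part_b] * ratio)

head = np.zeros((16, 22))  # assumed upper-body region of a compact image
legs = np.ones((16, 22))   # assumed lower-body region (e.g. below the knee)
pif = part_image_fusion(head, legs)            # ratio 1:1
mrd = multi_region_duplication(head, legs, 3)  # ratio 1:3
```

PSF, by contrast, operates on classifier scores rather than pixels, so it needs no image-level operation like the two shown here.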
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral