Title: Mobile image parsing for visual clothing search, augmented reality mirror, and person identification
Author: Cushen, George
ISNI: 0000 0004 5991 9206
Awarding Body: University of Southampton
Current Institution: University of Southampton
Date of Award: 2016
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS.
Access from Institution:
With the emergence and growing popularity of online social networks, depth sensors (such as Kinect), smartphones and tablets, wearable devices, and augmented reality (such as Google Glass and Google Cardboard), the way in which people interact with digital media has been transformed. Globally, the apparel market is expected to grow at a compound annual growth rate of 5% between 2012 and 2025. Given this impact on e-commerce, there is growing interest in methods for clothing retrieval and outfit recommendation, especially efficient ones suitable for mobile apps.

To this end, we propose a practical and efficient method for mobile visual clothing search and implement it as a smartphone app that enables the user to capture a photo of clothing of interest and retrieve similar clothing products available at nearby retailers.

Furthermore, we propose an extended method in which soft biometric clothing attributes are combined with anthropometrics computed from depth data for person identification and surveillance applications. This addresses the need, driven by the heightened security threat of recent years, for non-intrusive person identification that can operate at a distance without a subject's knowledge or cooperation. We implement the method in a wearable mobile augmented reality application based on a smartphone with Google Cardboard, demonstrating how a security guard's vision could be augmented to automatically identify a suspect in their field of view.

Lastly, we observe that a significant proportion of photos shared online and via apps are selfies, and photos of dressed people in general. It is therefore important, both for consumers and for industry, that systems are developed to understand the visual content of these vast networked datasets, to aid their management and enable smart analysis.
To this end, this dissertation introduces an efficient technique to segment clothing in photos and recognize clothing attributes. We demonstrate its relevance to the emerging augmented reality field by implementing an augmented reality mirror app for mobile tablet devices that segments a user's clothing in real time and enables them to realistically see themselves in the mirror wearing variations of the clothing, with different colours or graphics rendered. Empirical results show promising segmentation, recognition, and augmented reality performance.
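The colour-variation rendering described above can be illustrated with a minimal sketch: given a segmentation mask for the clothing region, a new colour can be rendered by shifting the hue of only the masked pixels. This is not the thesis implementation — the function name, hue-shift approach, and toy inputs below are illustrative assumptions.

```python
# Illustrative sketch (not the thesis implementation): recolour a segmented
# clothing region by shifting its hue, as an AR mirror might render a variation.
import colorsys
import numpy as np

def recolour_clothing(image, mask, hue_shift=0.5):
    """Return a copy of `image` with the hue of masked pixels shifted.

    image: (H, W, 3) float array with values in [0, 1].
    mask:  (H, W) boolean clothing-segmentation mask.
    """
    out = image.copy()
    ys, xs = np.nonzero(mask)          # coordinates of clothing pixels
    for y, x in zip(ys, xs):
        r, g, b = image[y, x]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        out[y, x] = colorsys.hsv_to_rgb((h + hue_shift) % 1.0, s, v)
    return out

# Toy example: a 2x2 pure-red image where only the top-left pixel is "clothing".
img = np.zeros((2, 2, 3))
img[..., 0] = 1.0                      # pure red everywhere
mask = np.array([[True, False],
                 [False, False]])
result = recolour_clothing(img, mask)  # top-left pixel shifts red -> cyan
```

A real-time app would vectorise this per-pixel loop and typically preserve shading (the value channel) while replacing hue, so the recoloured garment keeps its original folds and highlights.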
Supervisor: Nixon, Mark
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available