Title: Mobile robot teleoperation through eye-gaze (telegaze)
Author: Latif, H. O.
Awarding Body: Nottingham Trent University
Current Institution: Nottingham Trent University
Date of Award: 2010
In most teleoperation applications the human operator is required to monitor the status of the robot, as well as issue controlling commands, for the whole duration of the operation. With a vision-based feedback system, monitoring the robot requires the operator to look at a continuous stream of images displayed on an interaction screen. The eyes of the operator are therefore fully engaged in monitoring, and the hands in controlling. Since the eyes of the operator are engaged in monitoring anyway, inputs from their gaze can be used to aid in controlling. This frees the hands of the operator, either partially or fully, from controlling, so they can be used to perform any other necessary tasks. The challenge, however, lies in distinguishing between gaze inputs intended for controlling and those intended for monitoring. In mobile robot teleoperation, controlling consists mainly of issuing locomotion commands to drive the robot. Monitoring, on the other hand, consists of looking where the robot goes and looking for any obstacles in the route. Interestingly, there exists a strong correlation between humans' gazing behaviours and their moving intentions. This correlation has been exploited in this thesis to investigate novel means for mobile robot teleoperation through eye-gaze, named TeleGaze for short. The contribution of this thesis is a well-designed and extensively evaluated novel interface for TeleGaze that enables hands-free mobile robot teleoperation. Since the interface is the only part of an interactive system that the remote user comes into direct contact with, the thesis covers different phases of design, evaluation, and critical analysis of the TeleGaze interface. Three different prototypes (Native, Multimodal & Refined Multimodal) have been designed and evaluated using observational and task-oriented studies.
The result is a novel interface that interprets the gazing behaviour of the human operator into controlling commands in an intuitive manner. The interface demonstrates performance comparable to that of a conventional joystick-operated system, with the significant advantage of hands-free control, for a number of mobile robot teleoperation applications, provided the limitations of calibration and drift are taken into account.
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available