Integration of visual and haptic feedback for teleoperation
Teleoperation systems are an important tool for performing tasks that require the sensorimotor coordination of a human operator but where it is physically impossible for the operator to undertake such tasks in situ. The vast majority of these systems supply the operator with both visual and haptic sensory feedback so that the task at hand can be performed as naturally and fluently as possible, as though the operator were physically present at the remote site.

This thesis is concerned with overcoming the sensory limitations imposed by a fixed-camera teleoperation system. Its principal aim is the extraction and redisplay of visual information to facilitate such a system. The thesis augments the Oxford teleoperation system with a virtual viewing module that allows the operator to select his or her viewpoint and viewing direction onto the workcell: a computer vision system first tracks the locations of known objects in the workcell, and those objects are then rendered graphically on a display in front of the operator.

Because the model-based object tracker is built around a Kalman filter, this system motivates the design of experiments to examine whether the operator's visuo-motor control loop likewise maintains a state model of the manipulation process, as a Kalman filter does. Experimental evidence is presented showing that it does not. A new operator model is then postulated using an adaptive gain controller, with the gain chosen to minimise the variance between desired and actual output; the experimental evidence supports this model. These findings support the hypothesis that the required bandwidth of the tracking filter is both (i) sufficient for the tracker to robustly track hand-manipulated objects and (ii) matched to the visual needs of the operator.
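To make the tracking idea concrete, the following is a minimal sketch of one predict/update cycle of a scalar Kalman filter, the kind of recursion a model-based object tracker is built around. The abstract does not give the thesis's actual state model, dimensionality, or noise statistics, so the random-walk motion model and the noise variances `q` and `r` here are illustrative assumptions only.

```python
import random

def kalman_step(x, p, z, q=0.01, r=0.1):
    """One predict/update cycle for a scalar random-walk state.

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: under a random-walk model the state estimate is unchanged
    # and its uncertainty grows by the process noise q.
    x_pred = x
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Track a noisy, stationary object position (true value 5.0).
random.seed(0)
x, p = 0.0, 1.0
for _ in range(50):
    z = 5.0 + random.gauss(0.0, 0.3)
    x, p = kalman_step(x, p, z)
```

After a few iterations the estimate settles near the true position and the variance `p` shrinks toward a small steady-state value, which is the behaviour that lets such a tracker follow hand-manipulated objects smoothly despite noisy measurements.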