Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.769734
Title: Implicit human-robot interfacing in robotic surgery
Author: Gras, Gauthier
ISNI: 0000 0004 7659 1257
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2019
Abstract:
The surgical environment presents a large number of challenges for surgeons, from the crucial planning and decision-making processes involved, to the technical skills required to complete a surgical task. To tackle these challenges, systems of increasing capability have been designed. Surgical robots provide enhanced dexterity under the restrictive conditions of minimally invasive surgery, and advanced imaging and registration modalities allow powerful planning and localisation capabilities for surgery. However, despite the increasing complexity and capability of modern surgical robot systems, human-robot interfaces have remained largely unchanged. Modern surgical robots possess many degrees of freedom and system-specific dexterous workspaces, yet they are still mainly controlled using a static motion-scaling master-slave scheme in Cartesian space. Likewise, image-guidance information presented on separate displays interrupts the surgical workflow, as the surgeon must take their attention away from the surgical scene to use it. To fully utilise the capabilities of these systems, improved user interfaces that take advantage of the context of the surgical scene need to be developed. This thesis explores the design of such human-robot interfaces for two main cases: interfaces responsible for the control of robotic systems, and surgical navigation interfaces. A key motivation for this work is to develop methods for robotic control and navigation that feel transparent to the user. These interfaces should act implicitly based on the context of the surgical scene, rather than requiring explicit user input. Firstly, an implicit approach to controlling the surgical camera and a focussed energy delivery system is explored, using gaze information and no active gestures. This approach is then extended to intention recognition, to dynamically adapt the motion scaling of master-slave systems. Bayesian approaches to tuning these interfaces are also studied, showing the importance of not relying on manual tuning. The methods developed for intention recognition are then extended and applied to augmented reality, to provide optimal display behaviour in robotic applications. Finally, an approach combining augmented reality and force feedback is developed to provide implicit assistance for hand-held robotic instruments. Throughout this work, results from studies conducted for each of the proposed methods demonstrate the capabilities and potential clinical value of context-aware interfaces in improving surgical performance.
Supervisor: Yang, Guang-Zhong
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.769734
DOI: Not available