Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.772530
Title: Predicting and improving perception performance for robotics applications
Author: Gurau, Corina
ISNI: 0000 0004 7960 0163
Awarding Body: University of Oxford
Current Institution: University of Oxford
Date of Award: 2018
Availability of Full Text:
Access from EThOS:
Full text unavailable from EThOS. Please try the link below.
Access from Institution:
Abstract:
Perception systems are often the core component of a robotics framework, as their ability to accurately interpret sensor data is essential for autonomy. The goal of this thesis is to estimate and improve the perception performance of a mobile robot across large areas of operation, particularly when there is no guarantee that the test data distribution will match the training distribution. Such situations are prevalent for autonomous mobile robots operating outdoors under a variety of environmental conditions.
This thesis explores the adaptability of vision systems by training place-specific models that outperform generic ones. We show that it is possible to train such models in a self-supervised fashion using geometric scene constraints, without relying on costly image annotations.
This thesis also explores the awareness that vision systems have of their own capability to make correct predictions at any given moment. We approach this problem from two vantage points: firstly, through performance records, which model perception performance as a function of location and appearance, and secondly, through intrinsic model uncertainty, or introspection, as introduced by [Grimmett et al., 2016]. Performance records allow an autonomous agent to estimate the likelihood of making a mistake during future traversals of the same place. In a use case concerning whether to offer or deny autonomy, we show that an agent is able to estimate when its confidence levels are low, deny autonomy, and reduce the number of perception mistakes made.
Introspection refers to the ability of a model to associate an appropriate assessment of confidence with any test case. We introduce an efficient way to obtain well-calibrated and reliable uncertainty scores from neural networks. Our method is more computationally efficient than Bayesian neural networks or model ensembles, which, despite being well calibrated, are more cumbersome to train and slower to test.
Additionally, we believe we are the first to propose more introspective detectors within a state-of-the-art object detection framework such as Faster R-CNN. This thesis proposes vision systems that are not only more accurate but whose failures can also be more reliably predicted. In doing so, we advocate practical solutions that often make use of tools specific to robotics, such as additional sensing modalities or the localisation maps of an autonomous vehicle, but we also touch upon machine learning techniques such as Bayesian deep learning. While striving for high accuracy remains a crucial endeavour, given the safety-critical nature of robot perception we believe that estimating reliability, introspection, and diagnosing failure are indispensable when operating in cluttered, complex, and ever-changing environments.
Supervisor: Posner, Ingmar
Sponsor: European Commission
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.772530
DOI: Not available