Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.629540
Title: Vision & laser for road based navigation
Author: Napier, Ashley A.
ISNI:       0000 0004 5349 3080
Awarding Body: University of Oxford
Current Institution: University of Oxford
Date of Award: 2014
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS. Restricted access.
Abstract:
This thesis presents novel solutions for two fundamental problems associated with autonomous road driving. The first is accurate and persistent localisation, and the second is automatic extrinsic sensor calibration. We start by describing a stereo Visual Odometry (VO) system, which forms the basis of later chapters. This sparse approach to ego-motion estimation leverages the efficacy and speed of the BRIEF descriptor to measure frame-to-frame correspondences and infer subsequent motion. The system is able to output locally metric trajectory estimates, as demonstrated on many kilometres of data.

We then present a robust vision-only localisation system based on a two-stage approach. First, we gather a representative survey in ideal weather and lighting conditions. We then leverage locally accurate VO trajectories to synthesise a high-resolution orthographic image strip of the road surface. This road image provides a highly descriptive and stable template against which to match subsequent traversals. During the second phase, localisation, we use the VO to provide high-frequency pose updates, but correct for the drift inherent in all locally derived pose estimates with low-frequency updates from a dense image matching technique. Here a live image stream is registered against synthesised views of the road image generated from the survey. We use an information-theoretic measure, Mutual Information, to determine the alignment of live images and synthesised views. Using this measure we are able to successfully localise subsequent traversals of surveyed routes under even the most intense lighting changes expected in outdoor applications. We demonstrate our system localising in multiple environments with accuracy commensurate with that of an Inertial Navigation System.

Finally, we present a technique for automatically determining the extrinsic calibration between a camera and a Light Detection And Ranging (LIDAR) sensor in natural scenes. Rather than requiring a stationary platform as in prior art, we exploit platform motion, allowing us to aggregate data and adopt a retrospective approach to calibration. Coupled with accurate timing, this retrospective approach allows sensors with non-overlapping fields of view to be calibrated, as long as the observed workspaces overlap at some point. We then show how we can improve the accuracy of our calibration estimates by treating each single-shot estimate as a noisy measurement and fusing them together with a recursive Bayes filter. We evaluate the calibration algorithm in multiple environments and demonstrate millimetre precision in translation and deci-degree precision in rotation.
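The VO front end described in the abstract rests on matching binary BRIEF descriptors between frames. The sketch below is a minimal illustration of how such correspondences can be scored with Hamming distances, not the thesis's own code; the function name, the 256-bit descriptor size, and the distance threshold are all illustrative assumptions.

```python
import numpy as np

def match_brief(desc_a, desc_b, max_dist=40):
    """Nearest-neighbour matching of packed BRIEF descriptors.

    desc_a, desc_b: (N, 32) uint8 arrays, i.e. 256-bit BRIEF strings
    packed into bytes. Returns (i, j) index pairs whose Hamming
    distance falls below max_dist.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Hamming distance = popcount of the XOR of the two bit-strings.
        dists = np.unpackbits(np.bitwise_xor(desc_b, d), axis=1).sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```

Such frame-to-frame matches would then feed a standard stereo ego-motion estimator; that step is beyond this sketch.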
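The survey stage renders posed camera images into an orthographic strip of the road. At the core of any such renderer is the ground-plane homography. A sketch under the assumptions of a planar road at z = 0 in the world frame, known intrinsics K, and a world-to-camera pose (R, t); all names here are hypothetical, not from the thesis:

```python
import numpy as np

def ortho_cell_to_pixel(K, R, t, xy):
    """Map ground-plane points (z = 0, world frame) into an image.

    K: 3x3 intrinsics; (R, t): world-to-camera rotation/translation.
    xy: (N, 2) ground coordinates. Returns (N, 2) pixel coordinates.
    A survey renderer would sample the image at these pixels to paint
    each ortho-strip cell, accumulating over the VO trajectory.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # plane homography
    pts = np.column_stack((xy, np.ones(len(xy))))    # homogeneous (x, y, 1)
    uvw = pts @ H.T
    return uvw[:, :2] / uvw[:, 2:3]
```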
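For the localisation phase, the alignment score between a live image and a synthesised view is Mutual Information. A minimal sketch of that objective, assuming grayscale uint8 images and a joint-histogram estimate of the intensity distributions (the function name and bin count are illustrative):

```python
import numpy as np

def mutual_information(live, synth, bins=32):
    """Mutual information between two equal-sized grayscale images.

    A higher score indicates better alignment; a localiser would
    maximise this over candidate offsets or poses.
    """
    joint, _, _ = np.histogram2d(live.ravel(), synth.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    p_ls = joint / joint.sum()     # joint distribution P(l, s)
    p_l = p_ls.sum(axis=1)         # marginal P(l)
    p_s = p_ls.sum(axis=0)         # marginal P(s)
    nz = p_ls > 0                  # skip empty histogram cells
    # MI = sum over (l, s) of P(l, s) * log( P(l, s) / (P(l) P(s)) )
    return float(np.sum(p_ls[nz] * np.log(p_ls[nz] / np.outer(p_l, p_s)[nz])))
```

Because MI measures only the statistical dependence between intensities, not their absolute values, it is a natural fit for the severe lighting changes the abstract describes.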
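The calibration refinement treats each single-shot extrinsic estimate as a noisy measurement of a fixed parameter and fuses the estimates with a recursive Bayes filter. Under a Gaussian assumption this reduces, per parameter, to a scalar Kalman-style update; a sketch under that assumption, not the thesis's exact filter, with purely illustrative numbers:

```python
def fuse(mean, var, z, z_var):
    """One recursive Bayes update of a Gaussian belief (mean, var)
    over a static parameter, given measurement z with variance z_var."""
    k = var / (var + z_var)        # gain: how much to trust the new estimate
    return mean + k * (z - mean), (1.0 - k) * var

# Fold three hypothetical single-shot x-translation estimates (metres)
# into one belief, starting from a weak prior.
belief = (0.0, 1.0)
for z in (0.312, 0.305, 0.309):
    belief = fuse(*belief, z, z_var=1e-4)
print(belief)                      # mean converges, variance shrinks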
Supervisor: Newman, Paul M.
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.629540
DOI: Not available
Keywords: Information engineering ; Image understanding ; Robotics ; localisation ; computer vision ; mapping ; robot navigation ; calibration ; multi-modal calibration ; life long robotics