Title: An optimum visual sensor configuration for terrestrial locomotion
Author: Daniels, Geoffrey Philip
ISNI:       0000 0004 5923 1663
Awarding Body: University of Bristol
Current Institution: University of Bristol
Date of Award: 2015
Human technological advancement has continually created new opportunities for machinery to automate intensive tasks. However, these machines still need to be delivered to, and are often controlled by, humans. Autonomous Ground Vehicles (AGVs) can remove this locomotion dependency on humans entirely, enabling a robotic revolution. The locomotive performance of an AGV depends on the quantity and quality of information received about the terrain ahead; for this purpose, vision is by far the most effective sense. Contextual machine vision is a new area of research in which fundamental questions, such as how to optimise a visual system specification for a locomotive platform to enable fast locomotion, are yet to be addressed.

In this thesis, abstract mathematical models of a generic vision sensor and a generic locomotor platform were developed to investigate the relationship between sensor specification and locomotor performance with respect to a single key parameter: the maximum ground speed. Initially a static AGV model was investigated, before being expanded to include forward motion, enabling the maximum dynamic performance of an AGV to be evaluated. The vision sensor model was designed with interchangeable sensor geometries so that the performance of multiple geometries could be compared. Two of the geometries were designed to approximate a digital camera and the human eye, while a third removed the non-linearities associated with the detector.

The optimum specification for maximum speed was defined by the geometry of the sensor, and the achievable proximity to that optimum is restricted by the system resolution. Generally, the sensor geometries analogous to a digital camera and the human eye outperformed the linearised model; however, the linearised model can be made insensitive to sensor angle, which can be advantageous. Optical flow algorithm performance was not directly affected by detector geometry, although the resolution variation of the non-linearised detectors and the locomotion context did reduce tracking performance. Simulating pose error on the model, with either random or systematic error, showed that vision is required for motion estimation, which led to the development of an AGV vision system for human-controlled AGVs. The performance of a visually limited, human-controlled AGV in a virtual reality environment showed that a minimum of 500 features was required for good performance at a foot placement task.
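The link between sensor specification and maximum ground speed summarised in the abstract can be illustrated with a standard perception-limited speed calculation: a vehicle must be able to stop within the distance its sensor can resolve the terrain ahead, after allowing for processing latency. This is a minimal sketch under those textbook assumptions (constant deceleration, a fixed latency); the function name and parameters are illustrative and are not taken from the thesis's own models:

```python
import math

def max_ground_speed(lookahead_m: float, decel_mps2: float, latency_s: float) -> float:
    """Largest speed v at which the vehicle can still stop within its
    sensed look-ahead distance d, given braking deceleration a and
    sensing/processing latency t.

    During latency the vehicle travels v*t; braking then takes v^2/(2a).
    Solving v*t + v^2/(2a) = d for v gives the quadratic root below.
    """
    a, t, d = decel_mps2, latency_s, lookahead_m
    return a * (-t + math.sqrt(t * t + 2.0 * d / a))
```

For example, with a 10 m look-ahead, 5 m/s² braking, and zero latency, the safe speed is 10 m/s (stopping distance 10²/(2·5) = 10 m); adding latency reduces it, which is one way the quantity and timeliness of terrain information bound locomotive performance.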
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available