Title: Multi sensor data fusion applied to a class of autonomous land vehicles
Author: Walker, Richard James
ISNI:       0000 0001 3556 1279
Awarding Body: University of Southampton
Current Institution: University of Southampton
Date of Award: 1993
Availability of Full Text:
Access from EThOS:
Full text unavailable from EThOS. Please try the link below.
Access from Institution:
Many applications exist for unmanned vehicles: factory maintenance, planetary exploration, in-reactor inspection, etc. Robotic systems will inhabit a world containing obstacles, and these obstacles will threaten the pursuit of their goals. In all but the simplest and most benign environments these obstacles will be in motion, and their presence or location will not be known a priori. Therefore, in order to build practical, useful robots, a means of sensing the environment to determine traversable and non-traversable space needs to be developed. In addition, to prevent them from becoming lost, practical robots will be required to estimate where they are in the world in relation to known features; this capability is referred to as localisation. Clearly the primary sense for determining traversable space is sight. However, current research into machine vision has produced systems that are either too slow, too specific (i.e. related to a particular problem domain rather than a general one) or too unreliable. These factors have led to the development of an active sensor, the motion structured light sensor. This sensor sidesteps the ill-posed reconstruction problem and the problem of large data rates by illuminating the world with a laser sheet and determining 3D topography from the image of the intersection of this sheet with the world. The sensor has been developed to detect and track moving obstacles over time and has also been used as a means of vehicle localisation with respect to an a priori map. Although vision, and in particular structured light, is a useful source of topographic information, other sensors, such as ultrasonic sensors and laser rangefinders, offer the ability to determine the presence of geometric features in a scene. Motivated by the desire to generate richer descriptions of world state from disparate information sources, the research area of Multi Sensor Data Fusion (MSDF) is addressed.
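The structured-light principle described above can be sketched as a ray-plane intersection: a camera pixel defines a ray, and the 3D point is where that ray meets the known laser-sheet plane. The pinhole geometry, function name, and parameters below are illustrative assumptions, not the calibration of the thesis's actual sensor.

```python
import numpy as np

def intersect_ray_with_sheet(pixel, focal_length, sheet_normal, sheet_offset):
    """Recover the 3D point where a camera ray meets the laser sheet.

    Hypothetical pinhole model: camera at the origin, pixel (u, v) defines
    the ray direction (u, v, f); the laser sheet is the plane {p : n.p = d}.
    """
    u, v = pixel
    direction = np.array([u, v, focal_length], dtype=float)
    n = np.asarray(sheet_normal, dtype=float)
    denom = n @ direction
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser sheet")
    t = sheet_offset / denom          # scale along the ray to reach the plane
    return t * direction              # 3D point in camera coordinates

# Example: sheet is the vertical plane x = 0.5 m; ray through pixel (0.1, 0.0)
point = intersect_ray_with_sheet((0.1, 0.0), 1.0, (1.0, 0.0, 0.0), 0.5)
```

Sweeping the vehicle (or the sheet) through the scene and repeating this intersection for each bright pixel in the laser stripe yields the 3D topography profile.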
A mechanism for combining information based on the first- and second-order statistics available from the Kalman filter is presented. The MSDF system is applied (i) in simulation to a second-order plant and (ii) to a laboratory-based robot. This approach yields greater accuracy of state estimation, which in turn yields greater system robustness, including robustness with respect to sensor failure and sensor error. This thesis therefore presents a method of generating more accurate estimates of state by using multiple sources of information. This enables systems to be built that are more robust, not only because state estimates are more accurate but also because such systems possess multiple redundancy through the use of multiple sensors. It is shown that the use of multiple sensors also makes the system more robust with respect to the poor choice of noise models required by the Kalman filter.
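Fusion on first- and second-order statistics can be illustrated with the standard information-form combination of two independent estimates, each given as a mean and covariance of the kind a Kalman filter produces per sensor. This is a minimal sketch of the general idea, not the thesis's exact MSDF architecture.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Fuse two independent state estimates by inverse-covariance weighting.

    Each estimate is a mean vector (first-order statistic) and covariance
    matrix (second-order statistic). The fused covariance is never larger
    than either input, and the fused mean is weighted toward the more
    certain sensor.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)        # fused covariance (tighter than either)
    x = P @ (I1 @ x1 + I2 @ x2)       # precision-weighted fused mean
    return x, P

# Two scalar range estimates: 10.0 m (variance 4) and 11.0 m (variance 1)
x, P = fuse_estimates(np.array([10.0]), np.array([[4.0]]),
                      np.array([11.0]), np.array([[1.0]]))
# → fused mean 10.8 m, fused variance 0.8
```

Because each sensor contributes in proportion to its precision, a sensor whose noise model is badly chosen (or which has failed) is automatically down-weighted when its reported covariance is large, which is one route to the robustness claimed above.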
Supervisor: Not available Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID:  DOI: Not available
Keywords: Computer vision data in robots