Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.655874
Title: VISrec! : visual-inertial sensor fusion for 3D scene reconstruction
Author: Aufderheide, Dominik
ISNI:       0000 0004 5367 8640
Awarding Body: University of Bolton
Current Institution: University of Bolton
Date of Award: 2014
Abstract:
The automatic generation of three-dimensional models by analysing monocular image streams from standard cameras is a fundamental problem in the field of computer vision. A prerequisite for scene modelling is the computation of the camera pose for the different frames of the sequence. Several techniques and methodologies have been introduced during the last decade to solve this classical Structure from Motion (SfM) problem, which incorporates camera egomotion estimation and subsequent recovery of 3D scene structure. However, the applicability of these approaches to real-world devices and applications is still limited by unsatisfactory computational cost, accuracy and robustness. Thus, tactile systems and laser scanners remain the predominant methods for 3D measurement in industry. This thesis proposes a novel framework for 3D scene reconstruction based on visual-inertial measurements and a corresponding sensor fusion framework. The integration of additional modalities, such as inertial measurements, is useful to compensate for typical problems of systems which rely on visual information alone. The complete system is implemented on top of a generic framework for designing Multi-Sensor Data Fusion (MSDF) systems. It is demonstrated that incorporating inertial measurements into a visual-inertial sensor fusion scheme for scene reconstruction (VISrec!) outperforms classical methods in terms of robustness and accuracy. It is shown that combining visual and inertial modalities for scene reconstruction reduces the mean reconstruction error of typical scenes by up to 30%. Furthermore, the number of 3D feature points which can be successfully reconstructed is nearly doubled. In addition, range and RGB-D sensors have been successfully incorporated into the VISrec! scheme, proving the general applicability of the framework.
This makes it possible to increase the number of 3D points in the reconstructed point cloud by a factor of five hundred compared to standard visual SfM. Finally, the application of the VISrec! sensor to a specific industrial problem, in cooperation with a local company, for the reverse engineering of tailor-made car racing components demonstrates the usefulness of the developed system.
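The core idea of the abstract, compensating the drift of inertial integration with absolute visual pose measurements, can be illustrated with a minimal scalar Kalman filter. This is only an illustrative sketch, not the thesis's MSDF framework: the function name, noise parameters `q` and `r`, and the one-dimensional orientation state are assumptions chosen for brevity.

```python
def fuse_orientation(gyro_rates, visual_angles, dt=0.01, q=1e-4, r=1e-2):
    """Scalar Kalman filter fusing two modalities (illustrative only):
    - predict the angle by integrating a gyroscope rate (inertial modality),
    - correct it with an absolute angle from a visual pose estimate.
    q is the process-noise variance (gyro drift), r the measurement-noise
    variance of the visual estimate."""
    x, p = 0.0, 1.0          # state (angle, rad) and its variance
    estimates = []
    for rate, z in zip(gyro_rates, visual_angles):
        # Prediction step: dead-reckon from the inertial measurement.
        x = x + rate * dt
        p = p + q
        # Update step: blend in the visual measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

With a low `r` the filter trusts the drift-free visual measurements and suppresses accumulated gyro error; with a high `r` it falls back on smooth inertial integration, which mirrors the complementary roles the abstract ascribes to the two modalities.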
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.655874
DOI: Not available