Periscopic stereo and large-scale scene reconstruction.
This dissertation presents, for the first time, a practical design for a periscopic stereo head and investigates the computer vision tools necessary for 3D reconstruction from periscopic image data. It identifies two approaches to processing periscopic image data: "corrected", where a two-dimensional rotation is applied to the image plane prior to standard stereo processing, and "uncorrected", which ignores the "tumbling" effect inherent in periscopic image data until the final stage of reconstruction. This "late" correction circumvents a problem apparent in many existing stereo algorithms: resolving disparity measurements for imaged scene structure that lies parallel to the corresponding epipolar lines.
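The "corrected" approach amounts to undoing the periscope's roll with an in-plane rotation of image coordinates before stereo matching. As a minimal sketch (the function name, the rotation centre, and the angle convention are illustrative assumptions, not the dissertation's implementation):

```python
import math

def correct_tumbling(x, y, theta, cx=0.0, cy=0.0):
    """Rotate image-plane point (x, y) by theta radians about (cx, cy).

    Models the "corrected" processing path: the periscope's roll
    ("tumbling") is removed by a 2D rotation so that standard
    stereo processing can follow. Angle sign and centre of
    rotation are assumed conventions for this sketch.
    """
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = x - cx, y - cy
    return (cx + c * dx - s * dy, cy + s * dx + c * dy)
```

In practice the same rotation would be applied to every pixel (or, equivalently, to the whole image via a warp) before feature extraction and correspondence.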
Many of the existing stereo processing tools used in the course of this research required little modification, but all revealed issues, not immediately apparent in previous treatments, that needed resolution. This investigation stops short of the actual construction of 3D models, but presents a method of generating the sets of depth data required for large-scale scene reconstruction. Feature extraction, image data correspondence, camera calibration and the generation of depth information from periscopic image data are all covered in the context of this dissertation. In particular, a new method of combining existing camera calibration techniques, termed "calibration in a box", is presented, together with conclusions regarding the tools and techniques employed.
While periscopic stereo is still in development, it is the only imaging system reported to date that is likely to be capable of large-scale, autonomous 3D scene reconstruction, with particular application to remote operation in hazardous environments.