Title:
|
The generation of depth maps via depth-from-defocus
|
The principal aim of this study was to use the concept of image defocus related to feature depth in order to develop a system capable of converting a 2-dimensional greyscale image into a 3-dimensional depth map. An advantage of this concept (known as depth-from-defocus, or simply DfD) over techniques such as stereo imaging is that there is no so-called ‘correspondence problem’, in which the corresponding location of a feature or landmark point must be identified in each of the stereo images. The majority of previous DfD researchers, including the most successful, have used some variation of a ‘two-image’ technique in order to separate the contribution of the original scene features from the defocus effect. The best of these have achieved depth-estimation errors typically in the range of 1% to 2%. This thesis presents a single-image method of generating a high-density, high-accuracy depth map via the evaluation of the edge profiles of a projected structured light pattern. A novel technique of moving the projected pattern during the image capture stage allows the development of a 4-dimensional look-up table. This technique offers a solution to one of the last remaining problems in DfD, that of spatial variance. It also uses a technique to remove the dependence on original scene reflectance. The final solution generates a depth map of up to 240,000 spatially invariant depth estimates per scene image, with an accuracy of within ± 0.5%, over a depth range of 10 cm. The depth map is generated in a processing time of approximately 14 seconds once the images are loaded.
|
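The look-up-table approach described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch only: the 10%–90% edge-rise blur metric, the LUT dimensions (position bins, pattern shift, blur bins), and the toy blur-to-depth relation are all assumptions for demonstration, not the thesis's actual calibration or data.

```python
# Sketch of single-image depth-from-defocus via a precomputed look-up table.
# All dimensions and values below are illustrative assumptions.

def edge_blur_width(profile, lo=0.1, hi=0.9):
    """Estimate blur as the 10%-90% rise distance of a monotone edge profile."""
    pmin, pmax = min(profile), max(profile)
    span = (pmax - pmin) or 1.0
    norm = [(v - pmin) / span for v in profile]
    r10 = next(i for i, v in enumerate(norm) if v >= lo)  # first 10% crossing
    r90 = next(i for i, v in enumerate(norm) if v >= hi)  # first 90% crossing
    return r90 - r10

# Hypothetical 4-D LUT: depth indexed by (x bin, y bin, pattern shift, blur bin).
# Indexing by image position is what would make the estimates spatially invariant.
X, Y, S, B = 4, 4, 2, 8
lut = [[[[50.0 + 10.0 * b for b in range(B)]   # toy monotone blur->depth map
         for _ in range(S)]
        for _ in range(Y)]
       for _ in range(X)]

def depth_at(x_bin, y_bin, shift, profile):
    """Look up depth for one edge profile at a given image location."""
    b = min(edge_blur_width(profile), B - 1)
    return lut[x_bin][y_bin][shift][b]
```

For example, a sharp edge profile such as `[0, 0, 1, 1]` yields a blur width of 0, while a gently rising profile such as `[0, 0.2, 0.4, 0.6, 0.8, 1.0]` yields a width of 4 and therefore a larger looked-up depth under this toy calibration.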