Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.820539
Title: Local features for view matching across independently moving cameras
Author: Xompero, Alessio
ISNI: 0000 0004 9355 7114
Awarding Body: Queen Mary University of London
Current Institution: Queen Mary, University of London
Date of Award: 2020
Abstract:
Moving platforms, such as wearable and robotic cameras, need to recognise the same place observed from different viewpoints in order to collaboratively reconstruct a 3D scene and to support augmented reality or autonomous navigation. However, matching views is challenging for independently moving cameras that directly interact with each other, as severe geometric and photometric differences, such as viewpoint, scale, and illumination changes, can considerably decrease the matching performance. This thesis proposes novel, compact, local features that can cope with scale and viewpoint variations. We extract and describe an image patch at different scales of an image pyramid by comparing intensity values between learnt pixel pairs (binary tests), and employ a cross-scale distance when matching these features. We capture, at multiple scales, the temporal changes of a 3D point, as observed in the image sequence of a camera, by tracking local binary descriptors. After validating the feature-point trajectories through 3D reconstruction, we reduce, for each scale, the sequence of binary features to a compact, fixed-length descriptor that identifies the most frequent and the most stable binary tests over time. We then propose XC-PR, a cross-camera place recognition approach that stores locally, for each uncalibrated camera, spatio-temporal descriptors, extracted at a single scale, in a tree that is selectively updated as the camera moves. Cameras exchange descriptors selected from previous frames that lie within an adaptive temporal window and have the highest number of local features corresponding to the descriptors. The other camera locally searches and matches the received descriptors to identify and geometrically validate a previously seen place. Experiments on different scenarios show the improved matching accuracy of the joint multi-scale extraction and temporal reduction through comparisons with different temporal reduction strategies and with a cross-camera matching strategy based on Bag of Binary Words, as well as through the application to several binary descriptors. We also show that XC-PR achieves similar accuracy to, and is on average faster than, a baseline consisting of an incremental list of spatio-temporal descriptors. Moreover, XC-PR achieves accuracy similar to that of a frame-based Bag of Binary Words approach adapted to our method, while avoiding matching features that cannot be informative, e.g. for 3D reconstruction.
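The multi-scale binary description, cross-scale matching, and temporal reduction summarised in the abstract can be illustrated with a short sketch. The Python snippet below is not the thesis implementation: the patch size, the descriptor length, the number of pyramid scales, and the randomly sampled pixel pairs (standing in for the learnt pairs) are assumptions made only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    PATCH = 32      # assumed patch side length (pixels)
    N_TESTS = 256   # assumed number of binary tests (descriptor bits)
    N_SCALES = 3    # assumed number of pyramid scales

    # Random pixel pairs stand in for the learnt pairs used in the thesis.
    PAIRS = rng.integers(0, PATCH, size=(N_TESTS, 4))

    def describe(patch):
        # One bit per binary test: compare the intensities of the two pixels of a pair.
        r1, c1, r2, c2 = PAIRS.T
        return (patch[r1, c1] < patch[r2, c2]).astype(np.uint8)

    def pyramid_descriptors(image, y, x):
        # Describe the same point at each scale of a simple image pyramid.
        descs = []
        for s in range(N_SCALES):
            scaled = image[::2**s, ::2**s]
            patch = scaled[y // 2**s:y // 2**s + PATCH, x // 2**s:x // 2**s + PATCH]
            if patch.shape == (PATCH, PATCH):
                descs.append(describe(patch))
        return descs

    def cross_scale_distance(descs_a, descs_b):
        # Smallest Hamming distance over all pairs of scales of the two features.
        return min(int(np.count_nonzero(a != b)) for a in descs_a for b in descs_b)

    def temporal_reduction(tracked):
        # Reduce a tracked sequence of descriptors to one fixed-length descriptor:
        # per bit, keep the most frequent value; its stability is how far the
        # frequency is from 0.5 (1 = constant over time, 0 = flickering).
        D = np.stack(tracked)                   # (frames, N_TESTS)
        freq = D.mean(axis=0)
        return (freq >= 0.5).astype(np.uint8), np.abs(freq - 0.5) * 2

    # Usage with synthetic data: a slightly shifted view of the same image
    # should still yield a small cross-scale distance.
    img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    a = pyramid_descriptors(img, 60, 60)
    b = pyramid_descriptors(img, 62, 61)
    print(cross_scale_distance(a, b))
    # A short "track" of scale-0 descriptors reduced to one descriptor per bit.
    seq = [pyramid_descriptors(img, 60 + t, 60)[0] for t in range(5)]
    reduced, stability = temporal_reduction(seq)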
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.820539
DOI: Not available