Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.528709
Title: Learning deformable models for tracking human motion
Author: Baumberg, Adam
Awarding Body: University of Leeds
Current Institution: University of Leeds
Date of Award: 1996
Abstract:
The analysis and automatic interpretation of images containing moving non-rigid objects, such as walking people, has been the subject of considerable research in the field of computer vision and pattern recognition. In order to build fast and reliable systems, some kind of prior model is generally required. A model enables the system to cope with situations where there is considerable background clutter or where information is missing from the image data, whether due to imaging errors (e.g. motion blur) or to part of an object becoming hidden from view. Conventional approaches to tracking non-rigid objects require complex hand-crafted models which are not easily adapted to different problems. A more recent approach uses training information to build models for image analysis. This thesis extends this approach by building flexible 2D models, automatically, from sequences of training images. Efficient methods are described for using the resulting models for real-time contour tracking using optimal linear filtering techniques. The method is further extended by incorporating a feedback scheme to generate a more compact linear model, which is shown to be more robust and accurate for tracking. Models of the shape of an object do not utilise the temporal information contained within the training sequences. A novel method is described for automatically learning a spatiotemporal, physically-based model that allows the system to accurately predict the expected change in object shape over time. This approach is shown to increase the reliability of the system while requiring only a modest increase in computational cost. The system can be automatically trained on video sequences to learn constraints on the apparent shape and motion of a particular non-rigid object in a particular environment. Results show the system is capable of tracking several walking pedestrians in real time without the use of expensive dedicated hardware.
The output from this system has potential uses in the areas of surveillance, animation and gait analysis.
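The "flexible 2D models" learned from training sequences are linear shape models obtained by principal component analysis of aligned training contours. The following is a minimal generic PCA sketch of that idea, not the thesis's exact formulation; the function names and the flattened `(x, y)` contour encoding are illustrative assumptions.

```python
import numpy as np

def learn_shape_model(contours, n_modes=5):
    """Learn a linear (PCA) shape model from training contours.

    contours: array of shape (n_samples, 2*n_points) -- each row is a
    contour sampled at n_points (x, y) pairs, flattened.
    Generic PCA sketch; not Baumberg's exact formulation.
    """
    X = np.asarray(contours, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the sample covariance matrix
    cov = Xc.T @ Xc / (len(X) - 1)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]        # largest variance first
    modes = evecs[:, order[:n_modes]]      # (2*n_points, n_modes)
    return mean, modes

def project(contour, mean, modes):
    """Shape parameters b for a contour: b = P^T (x - mean)."""
    return modes.T @ (np.asarray(contour, dtype=float) - mean)

def reconstruct(b, mean, modes):
    """Approximate contour from shape parameters: x ~ mean + P b."""
    return mean + modes @ b
```

Truncating to the first few modes is what makes the model compact: any training-like contour is described by a handful of shape parameters rather than the full set of boundary points.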
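"Optimal linear filtering" for contour tracking is, in essence, Kalman filtering over the shape parameters: predict the next shape from the current state, then correct with the measured contour. Below is a constant-velocity Kalman filter sketch under that assumption; the class name, state layout, and noise values are illustrative defaults, not values from the thesis.

```python
import numpy as np

class ShapeKalmanFilter:
    """Constant-velocity Kalman filter over shape parameters.

    State is [b, b_dot] for n_params shape parameters: predict the
    next parameter vector, then correct it with the measurement.
    Illustrative sketch only; matrices are generic defaults.
    """
    def __init__(self, n_params, dt=1.0, q=1e-3, r=1e-2):
        n = n_params
        I = np.eye(n)
        # State transition: parameters integrate their velocities
        self.F = np.block([[I, dt * I], [np.zeros((n, n)), I]])
        # Measurement observes the shape parameters only
        self.H = np.hstack([I, np.zeros((n, n))])
        self.Q = q * np.eye(2 * n)   # process noise covariance
        self.R = r * np.eye(n)       # measurement noise covariance
        self.x = np.zeros(2 * n)     # state estimate
        self.P = np.eye(2 * n)       # state covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.H @ self.x       # predicted shape parameters

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.H @ self.x       # corrected shape parameters
```

The predict step is what lets a tracker survive short occlusions or blurred frames: when a measurement is missing, the predicted shape stands in for it until the contour is re-acquired.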
Supervisor: Hogg, D. C. Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.528709  DOI: Not available