Title: A visual, knowledge-based robot tracking system
Author: Khoo, B. E.
Awarding Body: University of Wales Swansea
Current Institution: Swansea University
Date of Award: 1998
Availability of Full Text: Access from EThOS
The use of robots is important in many areas of industrial automation. For reliable and safe operation, accurate and precise control of their activities is vital. A current drawback, however, is the lack of accurate position sensors that are not subject to load variations and wear; hence there is a need for an accurate external sensor that is free of such problems. Such external position determination is also important for safety and fault recovery. For example, the sharing of a working environment between humans and robots implies that the robot must work inside a tight working envelope, and position sensing is an essential input for determining the actual envelope occupied by the robot. The sensed position and, better still, the predicted position the robot is moving into, enable early detection of possible abnormal situations. Also, if the robot does fail, knowledge of its precise position is valuable in aiding rapid fault recovery. Approaches to position detection have usually relied on markers attached to the robot; these, however, can cause load problems, besides needing special, additional equipment.

This thesis proposes an approach to recovering the position of a robot visually, using a single, stationary camera, without the need for any special markers on the robot. The system exploits knowledge of the robot being monitored by employing a model-based approach. The basic principle is to find the values of the state variables of a robot model that describe the appearance of the robot in the captured image; the state variables here are the joint angles of the robot being monitored. The proposed model-based tracking system consists of several components. A feature-extraction component determines the robot features from the captured image, and a matching module finds possible correspondences between the extracted features and the modelled robot features.
The correspondences thus obtained are then used by a state-estimation component to recover the states that describe the positions of the joints. A tracking module then uses the recovered position, together with a robot motion model, to predict the future positions of the joints. The proposed system was tested statically and dynamically on a number of image sequences captured off-line. The accuracy achieved was 5 cm in static mode and within 10 pixels during dynamic tracking.
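The state-estimation and prediction steps described above can be illustrated with a minimal sketch. Note that the two-link planar arm, the Gauss-Newton fit of joint angles to observed feature positions, and the constant-velocity motion model below are illustrative assumptions, not the thesis's actual kinematic model or estimator.

```python
import numpy as np

def fk(angles, link_lengths=(1.0, 0.8)):
    """Forward kinematics of a hypothetical 2-link planar arm:
    returns the 2-D positions of the elbow and the end-effector,
    standing in for the robot features matched in the image."""
    t1, t2 = angles
    l1, l2 = link_lengths
    elbow = np.array([l1 * np.cos(t1), l1 * np.sin(t1)])
    tip = elbow + np.array([l2 * np.cos(t1 + t2), l2 * np.sin(t1 + t2)])
    return np.concatenate([elbow, tip])

def estimate_state(observed, guess, iters=50):
    """Recover joint angles by Gauss-Newton minimisation of the
    residual between observed and modelled feature positions."""
    x = np.asarray(guess, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = observed - fk(x)                    # residual vector (4,)
        J = np.zeros((4, 2))                    # numerical Jacobian
        for j in range(2):
            d = np.zeros(2)
            d[j] = eps
            J[:, j] = (fk(x + d) - fk(x - d)) / (2 * eps)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x

def predict_next(state, prev_state):
    """Constant-velocity motion model: extrapolate the joint angles
    one step ahead from the two most recent estimates."""
    return state + (state - prev_state)
```

In a tracking loop, the prediction from `predict_next` would serve as the initial `guess` for `estimate_state` on the next frame, which is what keeps the nonlinear fit close to the correct solution as the robot moves.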
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available