Title: Active visual tracking in multi-agent scenarios
Author: Wang, Yiming
ISNI: 0000 0004 7653 7971
Awarding Body: Queen Mary University of London
Current Institution: Queen Mary, University of London
Date of Award: 2018
Camera-equipped robots (agents) can autonomously follow people to provide continuous assistance in wide areas such as museums and airports. Each agent serves one person (target) at a time and aims to keep its target centred on the camera's image plane at a certain size (active visual tracking) without colliding with other agents and targets in its proximity. To perform collision-free active visual tracking, it is essential that each agent accurately estimates its own state and that of nearby targets and agents over time (i.e. tracking). Agents can track themselves with either on-board sensors (e.g. cameras or inertial sensors) or external tracking systems (e.g. multi-camera systems). However, on-board sensing alone is not sufficient for tracking nearby targets because of occlusions in crowded scenes, where an external multi-camera system can help.

To address both the scalability of wide-area applications and tracking accuracy, this thesis proposes a novel collaborative framework in which agents track nearby targets jointly with wireless ceiling-mounted static cameras in a distributed manner. Distributed tracking enables each agent to reach agreed state estimates of targets by iteratively communicating with neighbouring static cameras. However, such iterative neighbourhood communication may suffer poor communication quality (i.e. packet loss/error) under limited bandwidth, which degrades tracking accuracy. This thesis therefore proposes forming coalitions among static cameras prior to distributed tracking, based on a marginal information utility that accounts for both the communication quality and the local tracking confidence.

Agents move on demand when they receive requests from nearby static cameras. Each agent independently selects its target with limited scene knowledge and computes its robotic control for collision-free active visual tracking. Collision avoidance among robots and targets can be achieved by the Optimal Reciprocal Collision Avoidance (ORCA) method.
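The iterative neighbourhood communication described above can be illustrated with a minimal consensus sketch. This is not the estimator from the thesis: it assumes each camera holds a scalar local estimate of a target state, a fixed undirected neighbour graph, and a hand-picked step size `eps`, and shows only how repeated neighbour exchanges drive the nodes towards an agreed value.

```python
# Minimal sketch of iterative neighbourhood consensus for distributed
# tracking. All names (consensus_step, run_consensus) and the step
# size eps are illustrative assumptions, not from the thesis.

def consensus_step(estimates, neighbours, eps=0.2):
    """One communication round: each node nudges its estimate
    towards the estimates of its graph neighbours."""
    return [
        x + eps * sum(estimates[j] - x for j in neighbours[i])
        for i, x in enumerate(estimates)
    ]

def run_consensus(estimates, neighbours, rounds=200):
    """Repeat neighbour exchanges until the estimates (approximately)
    agree; on a connected undirected graph they converge to the
    average of the initial estimates."""
    for _ in range(rounds):
        estimates = consensus_step(estimates, neighbours)
    return estimates

# Three cameras on a line graph 0-1-2 with differing local estimates:
agreed = run_consensus([0.0, 3.0, 6.0], {0: [1], 1: [0, 2], 2: [1]})
```

In this toy setting all three nodes settle near 3.0, the average of the initial estimates; in the thesis the exchanged quantities would be full target state estimates, and packet loss on the wireless links is what motivates the coalition-formation step.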
To further address view maintenance during collision avoidance manoeuvres, this thesis proposes an ORCA-based method with adaptive responsibility sharing and heading-aware robotic control mapping. Experimental results show that the proposed methods achieve higher tracking accuracy and better view maintenance than state-of-the-art methods.
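The idea of responsibility sharing in ORCA can be sketched as follows. In standard ORCA, two reciprocating agents each take half of the velocity correction needed to avoid a collision; an adaptive share lets one agent take more of the burden, e.g. full responsibility when the other party is a non-cooperative target. The function below is a simplified 2-D illustration under those assumptions, not the thesis's control law, and the correction vector `u` is taken as given rather than derived from velocity obstacles.

```python
# Sketch of responsibility sharing for an ORCA-style correction.
# v_a, v_b are 2-D velocities; u is the total relative-velocity
# correction required to avoid collision (assumed precomputed).
# alpha is agent A's share of the responsibility: 0.5 is standard
# reciprocal sharing, 1.0 makes A fully responsible.

def share_avoidance(v_a, v_b, u, alpha):
    """Split the correction u between two agents so that the change
    in relative velocity (v_a - v_b) always equals u."""
    new_v_a = (v_a[0] + alpha * u[0], v_a[1] + alpha * u[1])
    new_v_b = (v_b[0] - (1 - alpha) * u[0], v_b[1] - (1 - alpha) * u[1])
    return new_v_a, new_v_b
```

With `alpha = 1.0` the second agent's velocity is unchanged, which models avoiding a pedestrian target that does not cooperate; an adaptive scheme would choose `alpha` per pair, e.g. from each agent's need to keep its own target in view.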
Supervisor: Not available
Sponsor: Queen Mary University of London ; China Scholarship Council
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available
Keywords: visual tracking ; Electronic Engineering and Computer Science ; robotics