Explaining visible behaviour
This thesis presents a novel approach to behaviour modelling in computer vision. Rather than relying on statistical measures of typicality, the technique builds an understanding of how people navigate towards a goal. By representing movement through the scene in terms of known goals and obstacles, and interpreting people's behaviour as evidence of underlying intentions, behaviour can be explained in terms of these previously defined goals. A family of related algorithms for performing this goal-directed analysis of behaviour is presented and evaluated, alongside a number of metrics for measuring how well the computed explanation matches the observed behaviour. These metrics can be interpreted as measures of goal-directedness, or intentionality. The system is evaluated using a novel methodology in which the algorithmic output is compared with the performance of humans engaged in a visual surveillance task. An application of the technique is demonstrated within the visual surveillance domain, classifying behaviour patterns as explicable or inexplicable. This approach has several advantages: it handles movable goals (for example, parked cars) with ease; trajectories never before presented to the system can still be classified as explicable; and the output of the system (for example "Agent n is heading towards goal m", with an associated score indicating how good the explanation is) is easily interpreted. The systems described in this thesis could also, in principle, be extended to handle richer varieties of scene, moving obstacles, and more complicated systems of goals.
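The flavour of goal-directed explanation described above can be sketched in miniature. The following is an illustrative assumption, not the thesis's actual algorithm: each candidate goal is scored by how consistently the agent's observed steps head towards it (mean cosine similarity between step direction and direction to the goal), and the trajectory is labelled explicable when the best score clears a threshold. The function names (goal_scores, explain), the example goals, and the threshold value are all hypothetical.

```python
import math

def goal_scores(trajectory, goals):
    """Score each candidate goal by how consistently the agent's
    steps head towards it (mean cosine similarity, in [-1, 1])."""
    scores = {}
    for name, (gx, gy) in goals.items():
        sims = []
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
            step = (x1 - x0, y1 - y0)
            to_goal = (gx - x0, gy - y0)
            ns, ng = math.hypot(*step), math.hypot(*to_goal)
            if ns == 0 or ng == 0:
                continue  # skip stationary steps or steps taken at the goal
            sims.append((step[0] * to_goal[0] + step[1] * to_goal[1]) / (ns * ng))
        scores[name] = sum(sims) / len(sims) if sims else 0.0
    return scores

def explain(trajectory, goals, threshold=0.8):
    """Return (best goal, its score, explicable?) for one trajectory."""
    scores = goal_scores(trajectory, goals)
    best = max(scores, key=scores.get)
    return best, scores[best], scores[best] >= threshold

# A straight walk heading towards the hypothetical "exit" goal.
track = [(0, 0), (1, 1), (2, 2), (3, 3)]
goals = {"exit": (10, 10), "car_park": (10, 0)}
print(explain(track, goals))  # → ('exit', 1.0, True)
```

Note how this toy version already exhibits two of the advantages claimed above: a goal can be moved simply by changing its coordinates, and a trajectory never seen before is still scored against the goal set rather than against a database of past tracks.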