Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.694929
Title: Video-based situation assessment for road safety
Author: Mohammad, Mahmud Abdulla
ISNI: 0000 0004 5993 3761
Awarding Body: Cardiff University
Current Institution: Cardiff University
Date of Award: 2016
Abstract:
In recent decades, situational awareness (SA) has been a major research subject in connection with autonomous vehicles and intelligent transportation systems. Situational awareness concerns the safety of road users, including drivers, passengers, pedestrians and animals. Moreover, it provides key information about the nature of upcoming situations. In order to build robust automatic SA systems that sense the environment, a variety of sensors, such as global positioning systems, radars and cameras, have been used. However, due to their high cost, complex installation procedures and high computational load, automatic situational awareness systems are unlikely to become standard for vehicles in the near future.

In this thesis, a novel video-based framework for the automatic assessment of risk of collision in a road scene is proposed. The framework uses as input only the video from a monocular camera, avoiding the need for additional, and frequently expensive, sensors. The framework has two main parts: a novel ontology tool for the assessment of risk of collision, and semantic feature extraction based on computer-vision methods. The ontology tool is designed to represent the various relations between the most important risk factors, such as risk from objects and road-environment risk. The semantic features related to these factors are based on computer-vision methods, such as pedestrian detection and tracking, road-region detection and road-type classification.

The quality of these methods is important for achieving accurate results, especially with respect to video segmentation. This thesis therefore proposes a new criterion for high-quality video segmentation: the inclusion of temporal-region consistency. On the basis of this criterion, an online method for the evaluation of video-segmentation quality is proposed. This method is more consistent than the state-of-the-art method in terms of perceptual segmentation quality, for both synthetic and real video datasets. Furthermore, using the Gaussian mixture model for video segmentation, one of the successful video-segmentation methods in this area, new online methods for both road-type classification and road-region detection are proposed. The proposed vision-based road-type classification method achieves higher classification accuracy than the state-of-the-art method for each road type individually and, consequently, higher overall classification accuracy. Likewise, the proposed vision-based road-region detection method achieves high accuracy compared with state-of-the-art methods, according to two measures: pixel-wise percentage accuracy and the area under the receiver operating characteristic (ROC) curve (AUC).

Finally, the performance of the automatic risk-assessment framework is evaluated. At this stage, the framework includes only the assessment of pedestrian risk in the road scene. Using the semantic information obtained via computer-vision methods, the framework's performance is assessed on two datasets: first, a new dataset proposed in Chapter 7, which comprises six videos, and second, a dataset comprising five examples selected from an established, publicly available dataset. Both datasets consist of real-world videos illustrating pedestrian movement. The experimental results show that the proposed framework achieves high accuracy in the assessment of risk resulting from pedestrian behaviour in road scenes.
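Note: the abstract names Gaussian-mixture-model (GMM) video segmentation as the basis of the road-type classification and road-region detection methods. The sketch below illustrates that general technique using OpenCV's MOG2 background subtractor, a per-pixel GMM. It is a minimal illustration of the family of methods referred to, not the thesis's implementation; the file name road_scene.mp4 and all parameter values are assumptions for the example.

    # Minimal sketch of GMM-based video segmentation (assumed setup, not the
    # thesis's method): each pixel is modelled by a mixture of Gaussians, and
    # pixels poorly explained by the background model are marked foreground.
    import cv2

    cap = cv2.VideoCapture("road_scene.mp4")  # hypothetical input video
    # history and varThreshold (illustrative values) control how quickly the
    # per-pixel Gaussian mixture adapts and how strict the foreground test is.
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Foreground/background decision per pixel from the GMM.
        mask = subtractor.apply(frame)
        # Morphological opening removes isolated noise pixels from the mask.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        cv2.imshow("foreground mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

In a pipeline of the kind the abstract describes, foreground regions produced this way would feed the higher-level semantic stages, such as pedestrian tracking and road-region detection.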
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.694929
DOI: Not available
Keywords: TK Electrical engineering. Electronics. Nuclear engineering