Title: AI governance through a transparency lens
Author: Theodorou, Andreas
Awarding Body: University of Bath
Current Institution: University of Bath
Date of Award: 2019
When we interact with any object, we inevitably construct mental models to assess our relationship with it. These models determine the object's perceived utility, our expectations of its performance, and how much trust we place in it. Yet the emergent behaviour of intelligent systems can be difficult even for their developers to understand, let alone for end users. Worse, some developers of intelligent systems have deliberately used anthropomorphic and other audiovisual cues to deceive the users of their creations. This deception, alongside pop-science narratives about the creation of an 'all-powerful' AI system, results in confusion regarding the moral status of our intelligent artefacts. Their ability to exhibit agency, or even to perform 'super-human' tasks, leads many to believe that they are worthy of being granted moral agency, a status so far given only to humans, or moral patiency. In this dissertation, I provide normative and descriptive arguments against granting any moral status to intelligent systems. As intelligent systems become increasingly integral parts of our societies, so grows the need for affordable, easy-to-use tools that provide transparency: the ability to request, at any point in time or over a specific period, an accurate interpretation of the agent's status. This dissertation provides the knowledge to build such tools. Two example tools, ABOD3 and ABOD3-AR, are presented here. Both provide real-time visualisation of transparency-related information for the action-selection mechanisms of intelligent systems. User studies presented in this document demonstrate how naive and expert end users can use ABOD3 and ABOD3-AR to calibrate their mental models.
In the three human-robot interaction studies presented, participants with access to real-time transparency information not only perceived the robots as less anthropomorphic, but also adjusted their expectations of and trust in the system once ABOD3 gave them an understanding of its Artificial Intelligence (AI), removing the 'scary' mystery of why 'it is behaving like that'. In addition, indicative results presented here demonstrate the advantages of implementing transparency for AI developers: students undertaking an AI module were able to better understand both the AI paradigms taught and the behaviour of their own agents by using ABOD3. Furthermore, in a post-incident transparency study conducted using Virtual Reality technology, participants took the role of a passenger in an autonomous vehicle (AV) that makes a moral choice: crash into one of two human-looking non-playable characters (NPCs). Participants were exposed to one of three conditions: a human driver, an opaque AV without any post-incident information, or a transparent AV that reported the characteristics of the NPCs that influenced its decision-making process, e.g. their demographic background. When these characteristics were revealed to participants after the incident, the autonomous vehicle was perceived as significantly more mechanical and utilitarian. Interestingly, our results also indicate that we find it harder to forgive machine-like intelligent systems than humans or even more anthropomorphic agents. Most importantly, the study demonstrates the need for caution when incorporating supposedly normative data, gathered through text-based crowd-sourced preferences in moral-dilemma studies, into the moral frameworks used in technology.
Based on the concerns that motivate this work and the results presented, I emphasise the need for policy that ensures the distribution of responsibility, the attribution of accountability, and the inclusion of transparency as a fundamental design consideration for intelligent systems. Hence, the research outlined in this document aims to contribute, and has successfully contributed, to the creation of policy: both soft governance, e.g. standards, and hard governance, i.e. legislation. Finally, future multi-disciplinary work is suggested to further investigate the effects of transparency on both naive and expert users. The proposed work is an extended investigation of how robots' behaviour and appearance affect their utility and our overall perception of them.
Supervisor: Payne, Stephen; Bryson, Joanna
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available