Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.790883
Title: Governing machine learning that matters
Author: Veale, Michael
Awarding Body: UCL (University College London)
Current Institution: University College London (University of London)
Date of Award: 2019
Availability of Full Text:
Access from EThOS:
Full text unavailable from EThOS. Please try the link below.
Access from Institution:
Abstract:
Personal data is increasingly used to augment decision-making and to build digital services, often through machine learning technologies: model-building tools which recognise and operationalise patterns in datasets. Researchers, regulators and civil society have expressed concern about how machine learning might create or reinforce social challenges, such as discrimination, or create new opacities that are difficult to scrutinise or challenge. This thesis examines how machine learning systems that matter, those involved in high-stakes decision-making, are and should be governed in their technical, legal and social contexts. First, it unpacks the provisions and framework of European data protection law in relation to these social concerns and machine learning's technical characteristics. Chapter 2 presents and examines how data protection and machine learning relate, revealing practical weaknesses and inconsistencies. Chapter 3 highlights characteristics of machine learning that might further stress data protection law; the framework's implicit assumptions and resultant tensions are examined through three lenses. These stresses bring policy opportunities amidst challenges, such as the chance to make clearer trade-offs and to expand the collective dimension of data protection rights. The thesis then pivots to the social dimensions of machine learning on the ground. Chapter 4 reports on interviews with 27 machine learning practitioners in the public sector about how they cope with value-laden choices today, unearthing a range of tensions between practical challenges and those imagined by the 'fairness, accountability and transparency' literature in computer science. One tension, between fairness and privacy, is unpacked and examined in further detail in Chapter 5 to demonstrate the kind of change in method and approach that might be needed to grapple with the findings of the thesis. The thesis concludes by synthesising the findings of the previous chapters and outlines policy recommendations of relevance to a range of interested parties.
Supervisor: Petersen, A.; Finkelstein, A. Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.790883 DOI: Not available