Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.756261
Title: Building and evaluating privacy-preserving data processing systems
Author: Melis, Luca
ISNI: 0000 0004 7429 2149
Awarding Body: UCL (University College London)
Current Institution: University College London (University of London)
Date of Award: 2018
Availability of Full Text:
Abstract:
Large-scale data processing prompts a number of important challenges, including guaranteeing that collected or published data is not misused, preventing disclosure of sensitive information, and deploying privacy protection frameworks that support usable and scalable services. In this dissertation, we study and build systems geared towards privacy-friendly data processing, enabling computational scenarios and applications in which potentially sensitive data can be used to extract useful knowledge, and which would not be possible without strong privacy guarantees. For instance, we show how to privately and efficiently aggregate data from many sources and large streams, and how to use the aggregates to extract useful statistics and train simple machine learning models. We also present a novel technique for privately releasing generative machine learning models and entire high-dimensional datasets produced by these models. Finally, we demonstrate that the data used by participants in training generative and collaborative learning models may be vulnerable to inference attacks and discuss possible mitigation strategies.
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.756261
DOI: Not available