Title: Frameworks for learning from multiple tasks
Author: Stamos, Dimitris
ISNI:       0000 0004 8506 6593
Awarding Body: UCL (University College London)
Current Institution: University College London (University of London)
Date of Award: 2020
Availability of Full Text: Full text unavailable from EThOS; please try access from the institution.
In this thesis we study different machine learning frameworks for learning multiple tasks together. Depending on the motivation and goals of each framework, we investigate its computational and statistical properties from both a theoretical and an experimental standpoint. The first problem we tackle is low rank matrix learning, a popular model assumption in multitask learning (MTL). Trace norm regularization is a widely used approach for learning such models. A standard optimization strategy formulates the problem as one of low rank matrix factorization, which, however, leads to a non-convex problem. We show that the critical points of this non-convex problem can be characterized. This allows us to provide an efficient criterion for determining whether a critical point is also a global minimizer. We extend this analysis to the case in which the objective is nonsmooth. The goal of the second problem is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta-distribution. As an extension of MTL, our goal here is to train on a set of tasks and perform well on future, unseen tasks. We consider a scenario in which the tasks are presented sequentially, without keeping any of their data in memory. We study the statistical properties of the proposed algorithm and prove non-asymptotic bounds on the excess transfer risk. Lastly, a common practice in machine learning is to concatenate many different datasets and apply a learning algorithm to the combined dataset. However, training on a collection of heterogeneous datasets can cause issues due to the presence of bias. We derive an MTL framework that can jointly learn subcategories within a dataset and undo the inherent bias existing within each of them.
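As a minimal illustration of why trace norm regularization leads naturally to a matrix factorization formulation (a standard variational identity, not the thesis's specific algorithm), the trace (nuclear) norm of a matrix W equals the minimum of (||U||_F² + ||V||_F²)/2 over all factorizations W = UVᵀ, and a "balanced" factorization built from the SVD attains it:

```python
import numpy as np

# Hypothetical example data: a low-rank matrix W (rank <= 4).
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))

# Trace (nuclear) norm: the sum of the singular values of W.
nuclear = np.linalg.svd(W, compute_uv=False).sum()

# Balanced factorization from the SVD W = P diag(s) Q^T:
# take U = P sqrt(diag(s)) and V = Q sqrt(diag(s)), so that W = U V^T.
P, s, Qt = np.linalg.svd(W, full_matrices=False)
U = P * np.sqrt(s)
V = Qt.T * np.sqrt(s)

# The factored objective (||U||_F^2 + ||V||_F^2) / 2 attains the nuclear norm,
# since the columns of P and Q have unit norm.
factored_value = 0.5 * (np.linalg.norm(U, "fro") ** 2
                        + np.linalg.norm(V, "fro") ** 2)

assert np.allclose(W, U @ V.T)
assert np.isclose(nuclear, factored_value)
```

Optimizing over the factors U and V directly is what makes the problem non-convex, which is the setting whose critical points the thesis characterizes.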
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available