Title: Knowledge sharing : from atomic to parametrised context and shallow to deep models
Author: Yang, Yongxin
Awarding Body: Queen Mary University of London
Current Institution: Queen Mary, University of London
Date of Award: 2017
Abstract: Key to achieving more effective machine intelligence is the capability to generalise knowledge across different contexts. In this thesis, we develop a new and very general perspective on knowledge sharing that unifies and generalises many existing methodologies, while being practically effective, simple to implement, and opening up new problem settings. Knowledge sharing across tasks and domains has conventionally been studied disparately. We first introduce the concept of a semantic descriptor and a flexible neural network approach to knowledge sharing that together unify multi-task/multi-domain learning, and encompass various classic and recent multi-domain learning (MDL) and multi-task learning (MTL) algorithms as special cases. We next generalise this framework from single-output to multi-output problems and from shallow to deep models. To achieve this, we establish the equivalence between classic tensor decomposition methods and specific neural network architectures. This makes it possible to implement our framework within modern deep learning stacks. We present both explicit low-rank and trace norm regularisation solutions. From a practical perspective, we also explore a new problem setting of zero-shot domain adaptation (ZSDA), where a model can be calibrated solely based on some abstract information about a new domain, e.g., metadata such as the capture device of photos, without collecting or labelling any data.
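The core idea of a semantic descriptor parametrising a model can be illustrated with a minimal sketch: a domain is described by a short vector, and the domain-specific weight matrix is composed as a descriptor-weighted combination of shared basis weights. All names, dimensions, and the binary descriptor below are hypothetical illustrations, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d input features, k descriptor length, c outputs.
d, k, c = 5, 3, 2

# Shared stack of basis weight matrices, one slice per descriptor element
# (the "knowledge" shared across domains/tasks).
P = rng.standard_normal((k, d, c))

def domain_weights(z, P):
    """Compose domain-specific weights as a descriptor-weighted sum of bases."""
    return np.tensordot(z, P, axes=1)  # shape (d, c)

# Example: a binary semantic descriptor encoding, say, capture-device metadata.
# A new descriptor yields a new model without any training data (the ZSDA idea).
z = np.array([1.0, 0.0, 1.0])
W = domain_weights(z, P)

x = rng.standard_normal(d)
y = x @ W
print(W.shape, y.shape)  # (5, 2) (2,)
```

Under this sketch, sharing structure across domains reduces to constraining the stacked weights, which is where the thesis's low-rank and trace norm regularisation solutions apply.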
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available
Keywords: Electronic Engineering and Computer Science; Machine learning; knowledge sharing