Title: Sparse nonlinear methods for predicting structured data
Author: Morris, Henry
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2012
Gaussian processes are now widely used to perform key machine learning tasks such as nonlinear regression and classification. An attractive feature of Gaussian process models is the behaviour of the error bars, which grow in regions away from observations, where there is high uncertainty about the interpolating function. The complexity of these models scales as O(N³) with sample size, which causes difficulties with large data sets. The goals of this work are to develop nonlinear, nonparametric modelling techniques for structure learning and prediction problems in which there are structured dependencies among the observed data, and to equip our models with sparse representations which serve both to handle prior sparse-connectivity assumptions and to reduce computational complexity.

We present Kernel Dynamical Structure Learning, a Bayesian method for learning the structure of interactions between variables in multivariate time series. We design a mutual information kernel to handle time-series trajectories, and show that prior knowledge about network sparsity can be incorporated using heavy-tailed priors over parameters. We evaluate the feasibility of our method on synthetic data, and extend the inference methodology to handle uncertain input data.

Next, we tackle the problem of belief propagation in Bayesian networks with nonlinear node relations. We propose an exact moment-matching approach for nonlinear belief propagation in any tree-structured graph, which we call Gaussian Process Belief Propagation. We extend this approach by adding hidden variables that allow nodes sharing common influences to be conditionally independent. This constitutes a novel approach to multi-output regression on bivariate graph structures, which we call Dependent Gaussian Process Belief Propagation. We describe sparse inference methods for both models, which reduce computational cost by learning compact parameterisations of the available training data.
We then apply our method to the real-world systems biology problem of protein inference in transcriptional networks.
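The two properties of Gaussian process models highlighted in the abstract — error bars that grow away from the observations, and O(N³) scaling from factorising the kernel matrix — can be seen in a minimal sketch of standard exact GP regression. This is an illustrative example in NumPy, not code from the thesis; the kernel choice, function names, and data are all hypothetical.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) for 1-D inputs (illustrative choice)."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Exact GP regression; the Cholesky factorisation is the O(N^3) step."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)  # O(N^3) in the number of training points
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_s = rbf_kernel(x_train, x_test)
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(x_test, x_test)) - np.sum(v ** 2, axis=0)
    return mean, var

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
# Predict at a training location and at a point far from all observations:
mean, var = gp_predict(x, y, np.array([0.0, 5.0]))
# the predictive variance at x = 5.0 exceeds that at x = 0.0,
# reflecting higher uncertainty away from the data
```

Sparse approximations of the kind developed in the thesis replace the full N×N factorisation with a compact parameterisation of the training data, avoiding this cubic cost.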
Supervisor: Ghanem, Moustafa ; Guo, Yi-Ke
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral