Title: Bayesian single- and multi-objective optimisation with nonparametric priors
Author: Shah, Amar
ISNI: 0000 0004 8497 5111
Awarding Body: University of Cambridge
Current Institution: University of Cambridge
Date of Award: 2020
Availability of Full Text: Full text unavailable from EThOS; access via the awarding institution.
Abstract:
Optimisation is integral to processes throughout science and economics, and arguably underpins human intelligence itself, the product of millions of years of optimisation through evolution. When resources are scarce, it is crucial to use them efficiently. In this thesis, we consider the task of maximising an unknown function which we may query point-wise. The function is 'costly' to evaluate, e.g. in run time or financial expense, so a judicious querying strategy given previous observations is required. We adopt a probabilistic, Bayesian non-parametric framework for modelling the unknown function. In particular, we focus on the 'Gaussian process' (GP), a popular non-parametric Bayesian prior over functions. In the introduction we motivate these choices and give an overview of the Gaussian process and its application to 'Bayesian optimisation'.

A GP's behaviour is intimately controlled by the choice of 'kernel', or covariance function, typically chosen to be a parametric function. In chapter 2 we instead place a non-parametric Bayesian prior, an inverse Wishart process, over the GP kernel function, and show that it may be marginalised analytically, leading to a 'Student-t process' (TP). Furthermore, we explore the larger class of 'elliptical processes', show that the TP is the most general member for which analytic calculation is possible, and apply it successfully to Bayesian optimisation.

The remainder of the thesis focusses on various Bayesian optimisation settings. In chapter 3, we consider a setting in which a function may be evaluated at multiple locations in parallel, and use a measure of information, 'entropy', to decide which batch of points to evaluate next. We similarly apply information gain to 'multi-objective' Bayesian optimisation in chapter 4, where one wishes to find, through sequential evaluation, a 'Pareto frontier' of settings that are efficient with respect to several different objectives. Finally, in chapter 5 we exploit the idea that in a multi-objective setting the objectives are 'correlated', incorporating this belief into our choice of prior distribution over the multiple objectives.
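As a rough illustration of the Bayesian optimisation loop the abstract describes, the following is a minimal sketch: a hand-coded GP posterior with a squared-exponential kernel, driving sequential queries of a costly black-box function. For simplicity it uses an upper-confidence-bound acquisition rather than the thesis's entropy-based criteria, and all parameter values (lengthscale, grid, beta) are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two 1-D arrays of inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """GP posterior mean and variance at x_query, given observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_query, x_train)
    mean = K_star @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_star.T)
    var = rbf_kernel(x_query, x_query).diagonal() - np.sum(K_star * v.T, axis=1)
    return mean, np.maximum(var, 0.0)

def bayes_opt(f, n_iters=10, beta=2.0):
    """Sequentially query f where the upper confidence bound is highest."""
    grid = np.linspace(0.0, 1.0, 201)      # candidate query locations
    x_obs = np.array([0.0, 1.0])           # two initial boundary queries
    y_obs = np.array([f(x) for x in x_obs])
    for _ in range(n_iters):
        mean, var = gp_posterior(x_obs, y_obs, grid)
        ucb = mean + beta * np.sqrt(var)   # optimism in the face of uncertainty
        x_next = grid[np.argmax(ucb)]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next))
    best = np.argmax(y_obs)
    return x_obs[best], y_obs[best]

# Maximise an (here hypothetical) costly-to-evaluate function on [0, 1].
x_best, y_best = bayes_opt(lambda x: -(x - 0.3) ** 2)
```

The acquisition trades off exploitation (high posterior mean) against exploration (high posterior variance); the information-theoretic acquisitions of chapters 3 and 4 instead score queries by expected reduction in entropy over the maximiser or Pareto frontier.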
Supervisor: Ghahramani, Zoubin
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
Keywords: machine learning ; Bayesian optimisation ; Bayesian ; optimisation ; sequential decision ; single objective ; multiple objective ; Gaussian process