Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804377
Title: Amortized inference and model learning for probabilistic programming
Author: Le, Tuan Anh
Awarding Body: University of Oxford
Current Institution: University of Oxford
Date of Award: 2020
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS. Please try the link below.
Access from Institution:
Abstract:
Probabilistic modeling lets us infer, predict, and make decisions based on incomplete or noisy data. The goal of probabilistic programming is to automate inference in probabilistic models that are expressed as probabilistic programs: programs that can draw random values and condition the resulting stochastic execution on data. The ability to define models using programming constructs such as recursion, stochastic branching, higher-order functions, and highly developed simulation libraries allows us to more easily express and perform inference in models that have simulators, a dynamic number of latent variables, highly structured latent variables, or nested probabilistic models. The key to the success of probabilistic programming is efficient inference and model learning in such models. The most powerful black-box approximate inference algorithms for probabilistic programs are either sampling-based (Monte Carlo methods) or optimization-based (variational methods). In both cases, one must re-run the inference algorithm for each new dataset. Hence, our first goal is to develop algorithms for amortized inference, which here refers to reducing the run-time cost of inference by pre-training a neural network that maps observed data to the parameters of an efficient proposal, enabling fast, repeated inference. Our second goal is to develop algorithms for model learning that are advantageous for two classes of models one might typically want to write as probabilistic programs: models of sequential data and models that contain discrete latent variables. In this thesis, we discuss the general topic of Bayesian machine learning and probabilistic programming before moving on to our methodological contributions in amortized inference and model learning.
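To make the amortized-inference idea in the abstract concrete, here is a minimal, self-contained sketch, not taken from the thesis itself: a neural network is pre-trained on samples drawn from a generative model so that, at test time, a single forward pass maps any observation to the parameters of a Gaussian proposal. The toy model (a Gaussian with unknown mean), the use of PyTorch, the network architecture, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

# Toy generative model: z ~ Normal(0, 1), x ~ Normal(z, 0.5).
def sample_joint(batch_size):
    z = torch.randn(batch_size)
    x = z + 0.5 * torch.randn(batch_size)
    return z, x

# Inference network: maps an observation x to the mean and log-std
# of a Gaussian proposal q(z | x).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Pre-training: maximize E_{p(z, x)}[log q(z | x)] over samples
# drawn from the model's joint distribution.
for step in range(2000):
    z, x = sample_joint(256)
    out = net(x.unsqueeze(-1))
    mean, log_std = out[:, 0], out[:, 1]
    q = torch.distributions.Normal(mean, log_std.exp())
    loss = -q.log_prob(z).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Amortized inference: for new data, a single forward pass yields
# proposal parameters; no per-datum optimization or sampling loop.
with torch.no_grad():
    out = net(torch.tensor([[1.5]]))
    print("q(z | x=1.5) ~= Normal(mean=%.2f, std=%.2f)"
          % (out[0, 0].item(), out[0, 1].exp().item()))

In spirit this follows the inference-compilation style of training, where the inference network learns purely from synthetic data simulated by the model itself; the learned proposal can then guide importance sampling or sequential Monte Carlo on real observations.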
Supervisor: Wood, Frank ; Teh, Yee Whye
Sponsor: Engineering and Physical Sciences Research Council ; Google
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.804377
DOI: Not available