Title: Monte Carlo algorithms for hypothesis testing and for hidden Markov models
Author: Ding, Dong
ISNI:       0000 0004 7659 1708
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2019
Monte Carlo methods are useful tools for approximating the numerical solution of a problem by random sampling when its analytic solution is intractable or computationally intensive. The main focus of this work is to investigate Monte Carlo methods for two inference problems: hypothesis testing and posterior analysis in a hidden Markov model (HMM).

The first part of this thesis focuses on deciding, via Monte Carlo simulation, whether the p-value of a statistical hypothesis test lies above or below a fixed threshold. We wish to control the resampling risk, i.e. the probability that the Monte Carlo decision differs from the decision based on the true, unknown p-value. We present the confidence sequence method (CSM), a simple Monte Carlo testing procedure that bounds the resampling risk uniformly. CSM is attractive because of its simple implementation and its performance, which is comparable to that of its competitors.

The second part of the thesis focuses on two posterior distributions of an HMM: the smoothing distribution and the parameter posterior. We apply a divide-and-conquer strategy (Lindsten et al., 2017) to develop Monte Carlo algorithms that provide sample approximations of the target distribution. We propose the tree-based particle smoothing algorithm (TPS) to estimate the joint smoothing distribution. We then assume an unknown parameter in the HMM and extend TPS to approximate its posterior; we refer to this extension as the tree-based parameter estimation algorithm (TPE). Both TPS and TPE construct an auxiliary tree that recursively splits the model into sub-models. The root of the tree represents the target distribution of the full model. We propose different forms of the intermediate target distributions of the sub-models associated with the non-root nodes, which are crucial to sampling quality. For the sampling process, we generate initial samples independently at the leaf nodes, and then recursively merge these samples along the tree until reaching the root.
Each merging step involves importance sampling for the (intermediate) target distribution. Their more adaptive design and improved accuracy compared to their competitors make TPS and TPE useful alternatives in practice.
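The sequential stopping rule behind CSM can be sketched as follows. This is a hedged illustration, not the thesis's exact implementation: it assumes the decision boundary is built from Robbins' (1970) confidence sequence for a binomial proportion, and the function names, defaults (`alpha`, `eps`), and the `sample_stat` interface are illustrative choices of ours.

```python
import math
import random

def csm_test(sample_stat, observed, alpha=0.05, eps=1e-3, max_samples=100_000):
    """Sequential Monte Carlo test in the spirit of CSM: draw resampled
    statistics one at a time and stop as soon as a confidence sequence
    for the unknown p-value excludes the threshold alpha.

    k counts resampled statistics at least as extreme as the observed
    one, so k/n estimates the p-value; eps bounds the resampling risk,
    i.e. the probability of returning the wrong decision.
    """
    k = 0
    log_eps = math.log(eps)
    for n in range(1, max_samples + 1):
        if sample_stat() >= observed:
            k += 1
        # Robbins' confidence sequence excludes alpha once
        # (n + 1) * Binomial(n, alpha).pmf(k) <= eps; work in log space
        # to avoid overflow of the binomial coefficient.
        log_bound = (
            math.log(n + 1)
            + math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(alpha) + (n - k) * math.log(1.0 - alpha)
        )
        if log_bound <= log_eps:
            # alpha is excluded: decide according to which side k/n lies on.
            return ("reject" if k / n < alpha else "accept"), n
    return "undecided", max_samples

# Toy usage: under the null the statistic is Uniform(0, 1), so the true
# p-value of observed = 0.999 is 0.001 (reject) and of 0.5 is 0.5 (accept).
random.seed(0)
decision_small_p, n_small = csm_test(random.random, 0.999)
random.seed(1)
decision_large_p, n_large = csm_test(random.random, 0.5)
```

Note the open-ended nature of the procedure: the number of samples drawn is random, small when the true p-value is far from the threshold and potentially unbounded when it is close, which is why the resampling-risk guarantee is stated uniformly over p.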
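The recursive merge along the tree can be illustrated on a toy two-leaf tree. This is a hedged sketch of a generic divide-and-conquer importance-sampling merge, not the thesis's TPS (which targets HMM smoothing distributions and depends on carefully chosen intermediate targets); the toy correlated-Gaussian target and all names here are our own illustrative assumptions.

```python
import math
import random

def merge_step(left, right, log_ratio, rng):
    """One divide-and-conquer merge: pair samples from two child nodes,
    weight each pair by the log-ratio of the parent's intermediate target
    to the product of the children's targets, and resample to obtain an
    equally weighted sample for the parent node."""
    pairs = list(zip(left, right))
    logw = [log_ratio(a, b) for a, b in pairs]
    m = max(logw)                        # stabilise before exponentiating
    w = [math.exp(v - m) for v in logw]
    # Multinomial resampling; rng.choices normalises the weights itself.
    return rng.choices(pairs, weights=w, k=len(pairs))

# Toy parent target: pi(x1, x2) proportional to
#   phi(x1) * phi(x2) * exp(-(x1 - x2)^2 / 2),
# a correlated bivariate Gaussian. The leaves propose x1 and x2
# independently from N(0, 1), so the merge weight reduces to
# exp(-(x1 - x2)^2 / 2).
rng = random.Random(42)
N = 20_000
x1 = [rng.gauss(0.0, 1.0) for _ in range(N)]
x2 = [rng.gauss(0.0, 1.0) for _ in range(N)]
joint = merge_step(x1, x2, lambda a, b: -0.5 * (a - b) ** 2, rng)

# The resampled pairs should recover the target's positive correlation
# (exactly 0.5 for this toy target).
ma = sum(a for a, _ in joint) / N
mb = sum(b for _, b in joint) / N
cov = sum((a - ma) * (b - mb) for a, b in joint) / N
va = sum((a - ma) ** 2 for a, _ in joint) / N
vb = sum((b - mb) ** 2 for _, b in joint) / N
corr = cov / math.sqrt(va * vb)
```

In a deeper tree this merge is applied recursively: each internal node's sample is itself the `left` or `right` input of its parent's merge, which is where the choice of intermediate target distributions becomes crucial to the weight variance.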
Supervisor: Gandy, Axel Sponsor: Imperial College London
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral