Title:
|
Towards optimality of the parallel tempering algorithm
|
Markov Chain Monte Carlo (MCMC) techniques for sampling from complex probability distributions have become mainstream. Big data and high model complexity demand more scalable and robust algorithms. A well-known weakness of MCMC is its lack of robustness when the target distribution is multi-modal. In such cases the algorithm can become trapped in a subset of the state space and fail to escape for the entire run, and this non-exploration of the state space results in highly biased sample output. The Simulated Tempering (ST) and Parallel Tempering (PT) algorithms are typically used to address multi-modality. These methods flatten out the target distribution using a temperature schedule, allowing the Markov chain to move freely around the state space and explore all regions of significant mass. This thesis explores two new ideas to improve the scalability of the PT algorithm. These are implemented in prototype algorithms, QuanTA and HAT, which are accompanied by supportive theoretical optimal scaling results. QuanTA focuses on improving the speed at which mixing information from the hot states is transferred to the cold target state. The associated scaling result for QuanTA shows that, under mild conditions, the QuanTA approach admits a higher-order temperature spacing than the PT algorithm. HAT focuses on preserving modal weight through the temperature schedule; failure to preserve modal weight is an issue that can lead to critically poor performance of the PT approach. The associated optimal scaling result is useful from a practical perspective. The result also challenges the notion that, in the absence of modal weight preservation, tempering schedules can be selected based on swap acceptance rates, an idea repeatedly used in the current literature. The new algorithms are prototype designs and have clear limitations. However, the impressive empirical performance of these new algorithms and the supportive theory illustrate their substantial improvement over existing methodology.
|
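To make the tempering idea concrete, the following is a minimal sketch of a standard Parallel Tempering sampler (not the thesis's QuanTA or HAT algorithms): several chains target the flattened densities pi(x)^beta for a ladder of inverse temperatures beta, each chain performs a random-walk Metropolis update, and adjacent temperatures periodically propose to swap their states. The bimodal example target, the temperature ladder, and all tuning parameters here are illustrative assumptions.

```python
import math
import random

def log_target(x):
    # Illustrative bimodal target: an equal mixture of two
    # well-separated unit-variance Gaussians centred at -4 and +4.
    return math.log(0.5 * math.exp(-0.5 * (x + 4.0) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 4.0) ** 2))

def parallel_tempering(betas, n_iters, step=1.0, seed=0):
    rng = random.Random(seed)
    xs = [0.0] * len(betas)   # one state per temperature (betas[0] = 1 is the cold chain)
    cold_samples = []
    for _ in range(n_iters):
        # Within-temperature random-walk Metropolis update for each chain.
        for k, beta in enumerate(betas):
            prop = xs[k] + rng.gauss(0.0, step)
            log_alpha = beta * (log_target(prop) - log_target(xs[k]))
            if math.log(rng.random()) < log_alpha:
                xs[k] = prop
        # Propose swapping the states of a random adjacent pair of temperatures.
        k = rng.randrange(len(betas) - 1)
        log_alpha = ((betas[k] - betas[k + 1])
                     * (log_target(xs[k + 1]) - log_target(xs[k])))
        if math.log(rng.random()) < log_alpha:
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
        cold_samples.append(xs[0])
    return cold_samples

samples = parallel_tempering(betas=[1.0, 0.3, 0.1], n_iters=20000)
# With the hot chains feeding mode-switching moves to the cold chain,
# the cold-chain output should visit both modes near -4 and +4.
visited_left = any(s < -2.0 for s in samples)
visited_right = any(s > 2.0 for s in samples)
```

The swap acceptance probability used above is the standard Metropolis ratio for exchanging states between two tempered densities; it is exactly the quantity whose average rate is often used to tune temperature schedules, a practice the thesis argues is unreliable when modal weights are not preserved.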