Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.784862
Title: Parallelised Bayesian optimisation for deep learning
Author: Kekempanos, L.
ISNI:       0000 0004 7970 4077
Awarding Body: University of Liverpool
Current Institution: University of Liverpool
Date of Award: 2019
Abstract:
Training deep neural networks (DNNs) is an indispensable process in machine learning. The training process aims to optimise the parameter values of the network and often relies on derivatives of the log-likelihood over the underlying parameter space. As a result, the optimisation process is likely to find local optima instead of global ones. In addition, conventional approaches to this problem, such as Markov chain Monte Carlo (MCMC) methods, not only offer suboptimal runtime performance, but also prevent effective parallelisation due to inherent dependencies in the process. In this thesis, we consider an alternative to MCMC methods, namely the Sequential Monte Carlo (SMC) sampler, which generalises particle filters. More specifically, the thesis focuses on improving the performance and accuracy of SMC methods, particularly in the context of fully Bayesian learning. The Radial Basis Function (RBF) network is an example of a training process based on fully Bayesian learning. In this setting, the thesis proposes a new method to train neural networks using importance sampling and resampling. An initial comparison of the two methods reveals that the proposed methodology is worse in terms of both accuracy and performance. This led the research to concentrate on improving the performance and accuracy of the proposed approach. The performance analysis began with the application of a newly proposed, parallel and fully distributed resampling methodology, with improved time complexity over the original approach, using two MapReduce frameworks, Hadoop and Spark. Results indicate that Spark is up to 25 times faster than Hadoop, while on Spark the newly proposed methodology is up to 10 times faster than the original method.
However, applying the same algorithm with the Message Passing Interface (MPI) provides significantly better runtimes and is more suitable for the proposed algorithm. The accuracy analysis began with experiments illustrating that the basic Sequential Monte Carlo sampler provides worse accuracy than alternative, competing MCMC algorithms. Three different strategies are then applied to the basic Sequential Monte Carlo sampler, each providing better accuracy. The analysis is extended to include competing algorithms. An exhaustive evaluation shows that the proposed approach offers superior performance and accuracy.
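The importance sampling and resampling steps at the core of the SMC sampler described in the abstract can be sketched as follows. This is a minimal, generic illustration, not the thesis's actual RBF-network training code: the standard-normal target, the wider normal proposal, and the function names are all hypothetical choices for the example.

```python
import math
import random

def normalised_importance_weights(particles, log_target, log_proposal):
    """Normalised importance weights w_i proportional to p(x_i)/q(x_i),
    computed in log space with the log-sum-exp trick for stability."""
    logw = [log_target(x) - log_proposal(x) for x in particles]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    total = sum(w)
    return [wi / total for wi in w]

def systematic_resample(particles, weights, u0=None):
    """Systematic resampling: a single uniform draw u0 in [0, 1/N)
    places N evenly spaced pointers over the cumulative weights."""
    n = len(particles)
    if u0 is None:
        u0 = random.random() / n
    out, j, cum = [], 0, weights[0]
    for i in range(n):
        p = u0 + i / n                       # i-th pointer position
        while p > cum and j < n - 1:         # advance to the particle whose
            j += 1                           # cumulative weight covers p
            cum += weights[j]
        out.append(particles[j])
    return out

# One importance-sampling/resampling step: standard-normal target,
# wider normal proposal (both stated only up to an additive constant).
random.seed(0)
log_target = lambda x: -0.5 * x * x              # N(0, 1)
log_proposal = lambda x: -0.5 * (x / 2.0) ** 2   # N(0, 4)
particles = [random.gauss(0.0, 2.0) for _ in range(1000)]
weights = normalised_importance_weights(particles, log_target, log_proposal)
particles = systematic_resample(particles, weights)
```

After resampling, every surviving particle carries equal weight 1/N. Note that the cumulative sum over the weights creates the sequential dependency that makes resampling the natural target for the parallel and distributed treatment the abstract describes.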
Supervisor: Maskell, S. ; Thiyagalingam, J. ; Goulermas, J. Y. I. Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.784862