Title:
|
Bayesian estimation and model comparison for mortality forecasting
|
The ability to forecast mortality accurately is of considerable interest in a wide range of applications, where inaccurate forecasts can incur substantial costs. The recent decline in mortality rates poses a major challenge to various institutions in their attempts to forecast mortality within acceptable risk margins. The ultimate aim of our project is to develop a methodology that produces accurate mortality forecasts, with carefully calibrated probabilistic intervals to quantify the uncertainty in those forecasts. Bayesian methodology is adopted throughout the thesis, primarily for its ability to provide a coherent modelling framework. Our contributions in this thesis can be divided into several parts. Firstly, we focus on the Poisson log-bilinear model of Brouhns et al. (2002), which imposes an undesirable property: equality of the mean and variance. Poisson log-normal and Poisson-gamma log-bilinear models, fitted using arbitrarily diffuse priors, are presented as possible solutions. We demonstrate that properly accounting for overdispersion prevents over-fitting and yields better-calibrated prediction intervals for mortality forecasting. Secondly, we carry out Bayesian model determination procedures to compare the models, using marginal likelihoods computed by bridge sampling (Meng and Wong, 1996). To approximate the marginal likelihoods accurately, a series of simulation studies is conducted to investigate the behaviour of the bridge sampling estimator. Next, a structurally simpler model, which postulates a log-linear relationship between the mortality rate and time, is considered. To provide a fair comparison between this model and the log-bilinear model, we investigate the prior specifications rigorously to ensure consistency in the prior information postulated for the two models. We propose Laplace prior distributions on the corresponding parameters of the log-linear model.
Finally, we demonstrate that the inclusion of cohort components is crucial for producing more accurate projections and for avoiding unnecessarily wide prediction intervals, as it improves the separation of systematic signal from error in the data.
|