CUED Publications database

Ergodic Inference: Accelerate Convergence by Optimisation

Zhang, Y and Hernández-Lobato, JM. Ergodic Inference: Accelerate Convergence by Optimisation. (Unpublished)

Full text not available from this repository.

Abstract

Statistical inference methods are fundamentally important in machine learning. Most state-of-the-art inference algorithms are variants of Markov chain Monte Carlo (MCMC) or variational inference (VI). However, both methods have limitations in practice: MCMC methods can be computationally demanding, while VI methods may suffer from large approximation bias. In this work, we aim to improve upon MCMC and VI with a novel hybrid method based on the idea of reducing the simulation bias of finite-length MCMC chains using gradient-based optimisation. The proposed method generates low-bias samples by increasing the length of the MCMC simulation and optimising the MCMC hyper-parameters, which offers an attractive balance between approximation bias and computational efficiency. We show that our method produces promising results on popular benchmarks when compared to recent hybrid methods combining MCMC and VI.
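
Since the full text is not available from this repository, the following is only a minimal sketch of the general idea described in the abstract, not the paper's actual algorithm: a finite-length MCMC-style chain (here, K unadjusted Langevin steps on a Gaussian target) whose hyper-parameters (per-step step sizes) are tuned by gradient-based optimisation so that the marginal distribution of the final state has lower bias with respect to the target. The Gaussian target, the Langevin transition, the closed-form KL objective, and all names (mu, Sigma, log_eps, K) are assumptions made purely for illustration; the paper's transition kernels and training objective may differ.

```python
# Illustrative sketch only (assumptions noted above): tune the step sizes of a
# short, fixed-length Langevin chain by gradient descent so that the marginal
# of the final state is close to the target. Everything is kept Gaussian so the
# bias of the finite-length chain (a KL divergence) is available in closed form.
import torch

torch.manual_seed(0)
d = 2
mu = torch.tensor([1.0, -2.0])                   # target mean (assumed)
Sigma = torch.tensor([[1.5, 0.7], [0.7, 1.0]])   # target covariance (assumed)
Sigma_inv = torch.linalg.inv(Sigma)

K = 5                                            # finite chain length
log_eps = torch.zeros(K, requires_grad=True)     # learnable per-step step sizes
opt = torch.optim.Adam([log_eps], lr=0.05)

def kl_gaussian(m, C, mu, Sigma):
    """KL( N(m, C) || N(mu, Sigma) ) in closed form."""
    dim = m.shape[0]
    S_inv = torch.linalg.inv(Sigma)
    diff = mu - m
    return 0.5 * (torch.trace(S_inv @ C)
                  + diff @ S_inv @ diff
                  - dim
                  + torch.logdet(Sigma) - torch.logdet(C))

def final_marginal(log_eps):
    """Propagate the Gaussian marginal through K unadjusted Langevin steps
    x' = x + eps * grad log p(x) + sqrt(2 * eps) * noise.
    For a Gaussian target the update is affine, so the marginal stays Gaussian."""
    m = torch.zeros(d)     # chain starts from N(0, I)
    C = torch.eye(d)
    for k in range(K):
        eps = torch.exp(log_eps[k])
        A = torch.eye(d) - eps * Sigma_inv
        m = A @ m + eps * Sigma_inv @ mu
        C = A @ C @ A.T + 2.0 * eps * torch.eye(d)
    return m, C

for step in range(500):
    opt.zero_grad()
    m, C = final_marginal(log_eps)
    loss = kl_gaussian(m, C, mu, Sigma)   # bias of the length-K chain
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"iter {step:4d}  KL(q_K || p) = {loss.item():.4f}")
```

In this toy setting the KL of the final marginal is tractable; in the general case described by the abstract, the bias of the finite-length chain would have to be estimated from samples, which is where the hybrid MCMC/VI construction comes in.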

Item Type: Article
Uncontrolled Keywords: cs.LG stat.ML
Subjects: UNSPECIFIED
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 17 Jun 2018 20:06
Last Modified: 18 Aug 2020 12:46
DOI: