CUED Publications database

Meta-learning for stochastic gradient MCMC

Gong, W and Li, Y and Hernández-Lobato, JM (2019) Meta-learning for stochastic gradient MCMC. 7th International Conference on Learning Representations, ICLR 2019.

Full text not available from this repository.


© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling. However, existing SG-MCMC schemes are not tailored to any specific probabilistic model, and even a simple modification of the underlying dynamical system requires significant physical intuition. This paper presents the first meta-learning algorithm that allows automated design of the underlying continuous dynamics of an SG-MCMC sampler. The learned sampler generalizes Hamiltonian dynamics with state-dependent drift and diffusion, enabling fast traversal and efficient exploration of energy landscapes. Experiments validate the proposed approach on learning tasks with Bayesian fully connected neural networks, Bayesian convolutional neural networks, and Bayesian recurrent neural networks, showing that the learned sampler outperforms generic, hand-designed SG-MCMC algorithms and generalizes to different datasets and larger architectures.
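For context, the generic, hand-designed baseline that such learned samplers are compared against is stochastic gradient Langevin dynamics (SGLD), the simplest SG-MCMC scheme: a gradient step on the log-posterior plus Gaussian injected noise. The sketch below is purely illustrative and is not the paper's meta-learned sampler; the function name `sgld_sample` and the toy standard-normal target are assumptions for the demonstration.

```python
import math
import random

def sgld_sample(grad_log_post, theta0, step_size, n_steps, seed=0):
    """Minimal SGLD sketch (1-D): theta <- theta + (eps/2) * grad log p(theta)
    + N(0, eps) noise. With mini-batches, grad_log_post would be a stochastic
    gradient estimate; here we use the full gradient of a toy target."""
    rng = random.Random(seed)
    theta = theta0
    samples = []
    for _ in range(n_steps):
        noise = rng.gauss(0.0, math.sqrt(step_size))
        theta = theta + 0.5 * step_size * grad_log_post(theta) + noise
        samples.append(theta)
    return samples

# Toy target: standard normal posterior, so grad log p(theta) = -theta.
samples = sgld_sample(lambda t: -t, theta0=3.0, step_size=0.1, n_steps=5000)
kept = samples[1000:]  # discard burn-in
mean = sum(kept) / len(kept)
var = sum((x - mean) ** 2 for x in kept) / len(kept)
```

The paper's learned sampler generalizes this kind of dynamics by replacing the fixed drift and diffusion terms with state-dependent functions whose form is meta-learned rather than hand-designed.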

Item Type: Article
Uncontrolled Keywords: stat.ML cs.LG
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 13 Jun 2018 20:06
Last Modified: 18 Aug 2020 12:46