CUED Publications database

Black-Box α-divergence minimization

Hernández-Lobato, JM and Li, Y and Rowland, M and Hernández-Lobato, D and Bui, TD and Turner, RE (2016) Black-Box α-divergence minimization. 33rd International Conference on Machine Learning, ICML 2016, 4. pp. 2256-2273.

Full text not available from this repository.

Abstract

Black-box alpha (BB-α) is a new approximate inference method based on the minimization of α-divergences. BB-α scales to large datasets because it can be implemented using stochastic gradient descent. BB-α can be applied to complex probabilistic models with little effort since it only requires as input the likelihood function and its gradients. These gradients can be easily obtained using automatic differentiation. By changing the divergence parameter α, the method is able to interpolate between variational Bayes (VB) (α → 0) and an algorithm similar to expectation propagation (EP) (α = 1). Experiments on probit regression and neural network regression and classification problems show that BB-α with non-standard settings of α, such as α = 0.5, usually produces better predictions than with α → 0 (VB) or α = 1 (EP).
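The interpolation described in the abstract can be illustrated with the Amari α-divergence, D_α(p||q) = (1 − ∫ p^α q^(1−α) dx) / (α(1 − α)), which recovers KL(q||p) (the VB objective) as α → 0 and KL(p||q) (the EP-style objective) as α → 1. The sketch below is an independent numerical illustration of this limiting behaviour on two example Gaussians, not the BB-α algorithm itself; the grid, densities, and function names are all assumptions for the example.

```python
import numpy as np

def alpha_divergence(p, q, dx, alpha):
    """Grid estimate of D_alpha(p||q) for densities tabulated on a grid."""
    integral = np.sum(p ** alpha * q ** (1.0 - alpha)) * dx
    return (1.0 - integral) / (alpha * (1.0 - alpha))

def kl(p, q, dx):
    """Grid estimate of KL(p||q)."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

# Two example Gaussians on a dense grid (illustrative choice)
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
gauss = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
p, q = gauss(0.0, 1.0), gauss(1.0, 1.5)

# D_alpha sweeps from KL(q||p) toward KL(p||q) as alpha goes 0 -> 1
for a in (0.001, 0.5, 0.999):
    print(f"alpha = {a}: D_alpha = {alpha_divergence(p, q, dx, a):.4f}")
print(f"KL(q||p) = {kl(q, p, dx):.4f}  (VB limit, alpha -> 0)")
print(f"KL(p||q) = {kl(p, q, dx):.4f}  (EP limit, alpha -> 1)")
```

Running this shows the endpoints of the sweep agreeing with the two KL divergences, matching the abstract's claim that α parameterizes a continuum between the VB and EP objectives.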

Item Type: Article
Subjects: UNSPECIFIED
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:27
Last Modified: 21 Nov 2017 03:05
DOI: