CUED Publications database

Variational implicit processes

Ma, C and Li, Y and Hernández-Lobato, JM (2019) Variational implicit processes. In: 36th International Conference on Machine Learning (ICML 2019), pp. 7464-7482.

Full text not available from this repository.


© 36th International Conference on Machine Learning, ICML 2019. All rights reserved. We introduce implicit processes (IPs), a class of stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables. IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks and non-linear transformations of stochastic processes. A novel and efficient approximate inference algorithm for IPs, the variational implicit process (VIP), is derived using generalised wake-sleep updates. This method yields simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments show that VIPs return better uncertainty estimates and lower errors than existing inference methods for challenging models such as Bayesian neural networks and Gaussian processes.
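To make the notion of an implicit process concrete, the following sketch draws joint function values from one simple example mentioned in the abstract: a Bayesian neural network prior. Each draw of the weights defines a function; evaluating it at a finite set of inputs yields one sample from the implicitly defined multivariate distribution over those points. This is an illustrative construction only, not the paper's implementation; the one-hidden-layer architecture, the standard-normal weight prior and the `1/sqrt(hidden)` scaling are assumptions for the sketch.

```python
import numpy as np

def sample_ip_function_values(x, n_samples=5, hidden=50, rng=None):
    """Draw joint function values f(x) from an implicit process prior.

    The prior here is a one-hidden-layer Bayesian neural network with
    standard-normal weights: each weight draw defines one function, and
    evaluating it at the inputs x gives one sample from the implicitly
    defined joint distribution over that finite collection of points.
    (Illustrative sketch; not the paper's model or code.)
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float).reshape(-1, 1)   # (n_points, 1)
    samples = []
    for _ in range(n_samples):
        w1 = rng.standard_normal((1, hidden))       # input -> hidden weights
        b1 = rng.standard_normal(hidden)            # hidden biases
        w2 = rng.standard_normal((hidden, 1))       # hidden -> output weights
        f = np.tanh(x @ w1 + b1) @ w2 / np.sqrt(hidden)
        samples.append(f.ravel())
    return np.stack(samples)                        # (n_samples, n_points)

x = np.linspace(-2.0, 2.0, 10)
fs = sample_ip_function_values(x, n_samples=3, rng=0)
print(fs.shape)
```

Note there is no tractable density for these samples; the distribution is defined only through the sampling procedure, which is exactly why the paper needs an approximate inference scheme such as VIP rather than exact likelihood-based training.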

Item Type: Conference or Workshop Item (UNSPECIFIED)
Uncontrolled Keywords: stat.ML cs.LG
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 17 Oct 2018 20:05
Last Modified: 18 Aug 2020 12:46