CUED Publications database

'In-Between' Uncertainty in Bayesian Neural Networks

Foong, AYK, Li, Y, Hernández-Lobato, JM and Turner, RE. 'In-Between' Uncertainty in Bayesian Neural Networks. (Unpublished)

Full text not available from this repository.


We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks. In particular, MFVI fails to give calibrated uncertainty estimates in between separated regions of observations. This can lead to catastrophically overconfident predictions when testing on out-of-distribution data. Avoiding such overconfidence is critical for active learning, Bayesian optimisation and out-of-distribution robustness. We instead find that a classical technique, the linearised Laplace approximation, can handle 'in-between' uncertainty much better for small network architectures.
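The abstract contrasts MFVI with the linearised Laplace approximation on data with separated input regions. As a rough illustration of the linearised Laplace predictive variance in such an 'in-between' setup, here is a minimal NumPy sketch: a small tanh network is fit to two separated clusters of 1-D inputs, and the predictive variance at a test point is taken as J(x*) Σ J(x*)ᵀ, where J is the Jacobian of the network output with respect to the parameters and Σ is the inverse of a generalised Gauss-Newton posterior precision. The data, network width, noise variance, and prior precision are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression: two separated input clusters, leaving a gap around x = 0.
x_left = rng.uniform(-2.0, -1.0, 20)
x_right = rng.uniform(1.0, 2.0, 20)
X = np.concatenate([x_left, x_right])
y = np.sin(2.0 * X) + 0.05 * rng.normal(size=X.shape)

H = 20            # hidden units (assumed; any small width illustrates the idea)
P = 3 * H + 1     # total parameter count of the one-hidden-layer network
sigma2 = 0.05     # assumed observation-noise variance
prior_prec = 1.0  # assumed isotropic Gaussian prior precision

def split(theta):
    """Unpack a flat parameter vector into (w1, b1, w2, b2)."""
    return theta[:H], theta[H:2 * H], theta[2 * H:3 * H], theta[3 * H]

def forward(theta, x):
    """Hidden activations and outputs of f(x) = w2 . tanh(w1*x + b1) + b2."""
    w1, b1, w2, b2 = split(theta)
    h = np.tanh(np.outer(x, w1) + b1)          # (N, H)
    return h, h @ w2 + b2                       # (N, H), (N,)

def jacobian(theta, x):
    """Jacobian of f(x) w.r.t. all parameters, stacked as (N, P)."""
    w1, b1, w2, b2 = split(theta)
    h, _ = forward(theta, x)
    dh = 1.0 - h ** 2                           # tanh'(.)
    return np.concatenate(
        [dh * w2 * x[:, None],                  # df/dw1
         dh * w2,                               # df/db1
         h,                                     # df/dw2
         np.ones((len(x), 1))], axis=1)         # df/db2

# Approximate MAP fit by full-batch gradient descent on MSE + weight decay.
theta = rng.normal(size=P)
for _ in range(10000):
    _, f = forward(theta, X)
    grad = jacobian(theta, X).T @ (f - y) / len(X) + 1e-3 * theta
    theta -= 0.05 * grad

# Linearised Laplace: generalised Gauss-Newton precision at the fitted weights,
# then predictive variance J(x*) Sigma J(x*)^T for test inputs.
Jtrain = jacobian(theta, X)
A = prior_prec * np.eye(P) + Jtrain.T @ Jtrain / sigma2
Sigma = np.linalg.inv(A)

def pred_var(xs):
    Js = jacobian(theta, xs)
    return np.einsum('np,pq,nq->n', Js, Sigma, Js)

var_gap = pred_var(np.array([0.0]))[0]    # in the gap between clusters
var_data = pred_var(np.array([-1.5]))[0]  # inside the left data cluster
```

The paper's observation is that this J Σ Jᵀ variance tends to grow in the gap between the two input clusters, whereas a mean-field Gaussian approximation tends to collapse it there; the sketch only shows the Laplace side of that comparison.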

Item Type: Article
Uncontrolled Keywords: stat.ML cs.AI cs.LG
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 16 Jul 2019 01:07
Last Modified: 18 Aug 2020 12:57