CUED Publications database

Two problems with variational expectation maximisation for time-series models

Turner, RE and Sahani, M (2011) Two problems with variational expectation maximisation for time-series models. In: Bayesian Time Series Models. Cambridge University Press, pp. 109-130.

Full text not available from this repository.


Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, and yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First, we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
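For readers unfamiliar with the free-energy the abstract refers to, the following is a minimal sketch of the standard vEM objective and updates; the notation (observations x, latent variables z, parameters theta, approximating distribution q) is chosen here for illustration and is not taken from the chapter itself.

% Variational free energy: a lower bound on the log marginal likelihood.
\begin{align}
\mathcal{F}(q,\theta)
  &= \int q(z)\,\log\frac{p(x,z\mid\theta)}{q(z)}\,dz \\
  &= \log p(x\mid\theta) - \mathrm{KL}\!\left[\,q(z)\,\big\|\,p(z\mid x,\theta)\,\right]
  \;\le\; \log p(x\mid\theta).
\end{align}
% vEM alternates coordinate ascent on \mathcal{F}:
%   E-step: q^{(k+1)} = \arg\max_{q \in \mathcal{Q}} \mathcal{F}(q,\theta^{(k)}),
%           with \mathcal{Q} a restricted family, e.g. a mean-field factorisation
%           over time, q(z_{1:T}) = \prod_{t=1}^{T} q_t(z_t).
%   M-step: \theta^{(k+1)} = \arg\max_{\theta} \mathcal{F}(q^{(k+1)},\theta).

Because the KL divergence is taken from q to the true posterior, restricted families tend to produce compact (under-dispersed) approximations; the abstract's two problems concern how this compactness prevents uncertainty from propagating through time and how it biases the parameters found in the M-step.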

Item Type: Book Section
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron job
Date Deposited: 16 Jul 2015 14:32
Last Modified: 25 Nov 2015 11:28