CUED Publications database

Scalable discrete sampling as a multi-armed bandit problem

Chen, Y and Ghahramani, Z (2016) Scalable discrete sampling as a multi-armed bandit problem. In: 33rd International Conference on Machine Learning, ICML 2016, Vol. 5, pp. 3691-3707.

Full text not available from this repository.

Abstract

Drawing a sample from a discrete distribution is one of the basic building blocks of Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from a high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency, as is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution based on a subsampling approach. We make a novel connection between discrete sampling and Multi-Armed Bandit problems with a finite reward population, and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.
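
For a concrete sense of the setting, the sketch below (not from the paper) samples a discrete variable whose unnormalized log-probabilities are sums over many per-data-point terms, as arises in Gibbs updates for large-scale Bayesian models, and contrasts an exact Gumbel-max sampler with a naive fixed-size subsampling approximation. The toy data, the Gumbel-max construction, and the names exact_sample/subsampled_sample are illustrative assumptions; the paper's contribution is the adaptive, bandit-based allocation of subsamples with guarantees, which this fixed-subsample version does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative): the unnormalized log-probability of each of
# K outcomes is a sum of N per-data-point terms, so computing it exactly
# costs O(N) per outcome.
K, N = 10, 100_000
terms = rng.normal(scale=1.0 / np.sqrt(N), size=(K, N))  # log p_k = sum_n terms[k, n]

def exact_sample(terms, rng):
    """Exact discrete draw via the Gumbel-max trick on the full log-probabilities."""
    log_p = terms.sum(axis=1)                  # O(K * N) work
    gumbel = rng.gumbel(size=log_p.shape)
    return int(np.argmax(log_p + gumbel))

def subsampled_sample(terms, rng, m=1_000):
    """Naive approximation: estimate each log-probability from a random
    subsample of m data terms. The paper's bandit algorithms instead adapt
    how many terms each outcome gets; this fixed-m version only illustrates
    the cost/accuracy trade-off."""
    K, N = terms.shape
    idx = rng.choice(N, size=m, replace=False)
    log_p_hat = terms[:, idx].sum(axis=1) * (N / m)  # unbiased estimate of each sum
    gumbel = rng.gumbel(size=log_p_hat.shape)
    return int(np.argmax(log_p_hat + gumbel))

print("exact draw:      ", exact_sample(terms, rng))
print("subsampled draw: ", subsampled_sample(terms, rng))
```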

Item Type: Article
Subjects: UNSPECIFIED
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:23
Last Modified: 22 Aug 2017 01:34
DOI: