CUED Publications database

Bayesian Learning of Noisy Markov Decision Processes

Singh, SS and Chopin, N and Whiteley, N (2013) Bayesian Learning of Noisy Markov Decision Processes. ACM Transactions on Modeling and Computer Simulation (TOMACS), 23. pp. 1-25. ISSN 1049-3301

Full text not available from this repository.


We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking, a controller on the basis of state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how the latent variables of the model can be estimated and how predictions about actions can be made within a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. It includes a parameter expansion step, which is shown to be essential for good convergence properties of the sampler. As an illustration, the method is applied to learning a human controller.
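The abstract's core idea, inferring the parameters of a stochastic controller from observed state/action pairs by sampling from a Bayesian posterior with MCMC, can be sketched in miniature. The toy model below is an assumption for illustration, not the paper's model: a softmax policy over known per-state-action scores with a single unknown inverse-temperature parameter `theta`, fit by random-walk Metropolis-Hastings (the paper's parameter expansion step is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's model): 3 states, 2 actions,
# known per-state-action scores; the controller picks actions via a
# softmax policy with unknown inverse temperature `theta`.
scores = np.array([[1.0, -0.5],
                   [0.2, 0.8],
                   [-1.0, 0.3]])

def log_policy(theta):
    # Log-softmax over actions, one row per state.
    z = theta * scores
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

# Simulate observed state/action data from a "true" controller.
true_theta = 2.0
states = rng.integers(0, 3, size=200)
probs = np.exp(log_policy(true_theta))
actions = np.array([rng.choice(2, p=probs[s]) for s in states])

def log_post(theta):
    # Log-likelihood of observed actions + N(0, 2^2) prior on theta.
    ll = log_policy(theta)[states, actions].sum()
    return ll - 0.5 * (theta / 2.0) ** 2

# Random-walk Metropolis-Hastings targeting the posterior over theta.
samples = []
theta, lp = 0.0, log_post(0.0)
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[1000:])  # drop burn-in
print("posterior mean:", posterior.mean())
```

Given the posterior samples of `theta`, predictions about future actions (the "mimicking" task) follow by averaging the softmax policy over the samples.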

Item Type: Article
Divisions: Div F > Signal Processing and Communications
Date Deposited: 17 Jul 2017 19:54
Last Modified: 07 Mar 2019 14:57
DOI: 10.1145/2414416.2414420