Singh, SS, Chopin, N and Whiteley, N (2013) Bayesian Learning of Noisy Markov Decision Processes. ACM Transactions on Modeling and Computer Simulation (TOMACS), 23, pp. 1-25. ISSN 1049-3301.
We consider the inverse reinforcement learning problem, that is, the problem of learning from state/action data generated by a controller and then predicting or mimicking that controller's behavior. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. The sampler includes a parameter expansion step, which is shown to be essential for its good convergence properties. As an illustration, the method is applied to learning a human controller.
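As a rough illustration of the kind of inference the abstract describes, the sketch below performs Bayesian estimation of a reward parameter from observed state/action pairs, assuming the controller follows a softmax ("noisy optimal") policy and using a plain random-walk Metropolis sampler. This is not the authors' model or MCMC scheme (in particular, it omits the parameter expansion step); the transition kernel, softmax temperature, prior, and all variable names are assumptions made purely for illustration.

```python
# Minimal sketch: Bayesian inverse reinforcement learning for a tiny MDP.
# All modelling choices (softmax policy, Gaussian prior, random-walk Metropolis)
# are assumptions for illustration, not the method of Singh, Chopin and Whiteley.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.9

# Assumed known transition kernel P[a, s, s'] (randomly generated for the sketch).
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

def q_values(theta, n_iter=200):
    """Value iteration for state reward r(s) = theta[s]; returns Q[s, a]."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):
        V = Q.max(axis=1)
        Q = theta[:, None] + gamma * np.einsum('ast,t->sa', P, V)
    return Q

def log_likelihood(theta, data, beta=2.0):
    """Log-likelihood of observed (state, action) pairs under a softmax policy."""
    logits = beta * q_values(theta)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return sum(logp[s, a] for s, a in data)

# Synthetic demonstration data from a "true" reward (for the sketch only).
theta_true = np.array([0.0, 1.0, -1.0, 0.5])
Q_true = q_values(theta_true)
states = rng.integers(n_states, size=300)
probs = np.exp(2.0 * Q_true[states])
probs /= probs.sum(axis=1, keepdims=True)
actions = np.array([rng.choice(n_actions, p=p) for p in probs])
data = list(zip(states, actions))

def log_post(theta):
    # Independent N(0, 1) prior on each reward component plus the likelihood.
    return -0.5 * np.sum(theta ** 2) + log_likelihood(theta, data)

# Random-walk Metropolis over the reward parameters.
theta, lp = np.zeros(n_states), log_post(np.zeros(n_states))
samples = []
for _ in range(2000):
    prop = theta + 0.1 * rng.standard_normal(n_states)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

print("posterior mean reward:", np.mean(samples[1000:], axis=0).round(2))
```

Such a basic sampler can mix poorly when the reward is weakly identified by the data, which is the kind of difficulty that motivates augmentation devices such as the parameter expansion step mentioned in the abstract.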
Divisions: Div F > Signal Processing and Communications