Rasmussen, C. E. and Deisenroth, M. P. (2008) Probabilistic inference for fast learning in control. In: Recent Advances in Reinforcement Learning. Lecture Notes in Computer Science (subseries: Lecture Notes in Artificial Intelligence). Springer, pp. 229-242. ISBN 9783540897217.
A novel framework is provided for very fast model-based reinforcement learning in continuous state and action spaces. It requires probabilistic models that explicitly characterize their levels of confidence. Within the framework, flexible, non-parametric models are used to describe the world based on previously collected experience. Learning is demonstrated on the cart-pole problem in a setting where very limited prior knowledge about the task has been provided. Learning progressed rapidly, and a good policy was found after only a small number of iterations.
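The abstract's key requirement is a model that "explicitly characterizes its level of confidence." A minimal sketch of what such a model looks like, assuming a Gaussian process with a squared-exponential kernel as the non-parametric world model (the kernel choice, hyperparameters, and toy data here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance between the row vectors of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, noise_var=1e-2):
    """GP posterior mean and variance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = rbf_kernel(X_star, X_star).diagonal() - np.sum(v**2, axis=0)
    return mean, var

# Toy "dynamics" data observed on x in [-1, 1] (hypothetical example).
X = np.linspace(-1, 1, 8)[:, None]
y = np.sin(2 * X[:, 0])
X_star = np.array([[0.0], [3.0]])  # one query near the data, one far away
mean, var = gp_predict(X, y, X_star)
# The predictive variance grows far from the training data: the model
# explicitly reports where its predictions should not be trusted.
```

The point of the sketch is the `var` output: a plain function approximator would extrapolate confidently at `x = 3.0`, whereas the probabilistic model flags that prediction as highly uncertain, which is what lets model-based learning avoid over-committing to an unreliable model.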
Item Type: Book Section
Additional Information: Recent Advances in Reinforcement Learning contains revised and selected papers from the 8th European Workshop on Reinforcement Learning, EWRL 2008, Villeneuve d'Ascq, France, 30 June - 3 July 2008. Series ISSN 0302-9743 (Print), 1611-3349 (Online).
Divisions: Div F > Computational and Biological Learning