Hall, J. and Rasmussen, C. E. and Maciejowski, J. (2011) Reinforcement learning with reference tracking control in continuous state spaces. Proceedings of the IEEE Conference on Decision and Control, pp. 6019-6024. ISSN 0191-2216
The contribution described in this paper is an algorithm for learning nonlinear, reference-tracking control policies given no prior knowledge of the dynamical system and limited interaction with the system during the learning process. Concepts from reinforcement learning, Bayesian statistics, and classical control are brought together in the formulation of this algorithm, which can be viewed as a form of indirect self-tuning regulator. On a reference-tracking task using a simulated inverted pendulum, it was shown to yield generally improved performance over the best controller derived from the standard linear quadratic method, using only 30 s of total interaction with the system. Finally, the algorithm was shown to work on a simulated double pendulum, demonstrating its ability to solve nontrivial control tasks. © 2011 IEEE.
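The linear quadratic baseline the abstract compares against can be illustrated with a minimal sketch: an LQR gain computed for a linearized inverted pendulum on a cart. The dynamics, discretization step, and cost weights below are illustrative assumptions, not values from the paper; the Riccati equation is solved by plain fixed-point iteration to keep the example self-contained.

```python
import numpy as np

# Hypothetical linearized cart-pole (inverted pendulum on a cart),
# Euler-discretized with dt = 0.02 s; parameters are illustrative.
dt = 0.02
g, l = 9.81, 0.5  # gravity, pendulum length
# State: [cart position, cart velocity, pole angle, pole angular velocity]
A = np.eye(4) + dt * np.array([
    [0, 1, 0,     0],
    [0, 0, 0,     0],
    [0, 0, 0,     1],
    [0, 0, g / l, 0],
])
B = dt * np.array([[0.0], [1.0], [0.0], [-1.0 / l]])

Q = np.diag([1.0, 0.1, 10.0, 0.1])  # state cost
R = np.array([[0.01]])              # input cost

# Solve the discrete algebraic Riccati equation by fixed-point iteration.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.allclose(P_next, P, atol=1e-10):
        P = P_next
        break
    P = P_next

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain, u = -K x
rho = max(abs(np.linalg.eigvals(A - B @ K)))       # closed-loop spectral radius
```

A spectral radius below one confirms the gain stabilizes the linearized system; the paper's contribution is to learn a nonlinear tracking policy that outperforms such a fixed linear controller from limited data.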
Divisions: Div F > Computational and Biological Learning; Div F > Control