Daubigney, L., Gašić, M., Chandramohan, S., Geist, M., Pietquin, O. and Young, S. (2011). Uncertainty management for on-line optimisation of a POMDP-based large-scale spoken dialogue system. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 1301–1304. ISSN 1990-9772.
The optimisation of dialogue policies using reinforcement learning (RL) is now an accepted part of the state of the art in spoken dialogue systems (SDS). Yet the commonly used training algorithms for SDS still require a large number of dialogues, so most systems rely on artificial data generated by a user simulator, and optimisation is performed off-line before releasing the system to real users. Gaussian processes (GPs) for RL have recently been applied to dialogue systems. One advantage of GPs is that they compute an explicit measure of uncertainty in the value function estimates produced during learning. In this paper, a class of novel learning strategies is described which uses this uncertainty to control exploration on-line. Comparisons between several exploration schemes show that significant improvements to learning speed can be obtained and that rapid and safe on-line optimisation is possible, even on a complex task. Copyright © 2011 ISCA.
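The abstract's core idea, using the learner's own uncertainty estimate to drive exploration, can be illustrated with a minimal sketch. This is not the paper's GP-SARSA algorithm; it is a generic UCB-style action selector, assuming we already have a per-action posterior mean and variance (as a GP value-function model would provide). The function name and the `beta` trade-off parameter are illustrative choices, not from the paper.

```python
import math

def select_action(q_mean, q_var, beta=1.0):
    """UCB-style selection: maximise posterior mean + beta * std.

    q_mean, q_var -- per-action posterior mean and variance of the
    value estimate (e.g. from a GP); beta scales the exploration bonus.
    """
    scores = [m + beta * math.sqrt(v) for m, v in zip(q_mean, q_var)]
    return max(range(len(scores)), key=scores.__getitem__)

# Early in learning, an uncertain action wins despite a lower mean:
print(select_action([0.5, 0.4], [0.01, 0.5], beta=2.0))   # -> 1
# Once variances shrink, selection reverts to the mean estimate:
print(select_action([0.5, 0.4], [0.01, 0.01], beta=2.0))  # -> 0
```

The appeal for on-line SDS training is that exploration is targeted where the model is genuinely unsure, rather than uniformly at random, which is what makes learning with real users both faster and safer.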
Keywords: Reinforcement learning; Spoken dialogue systems