Jurčíček, F., Thomson, B. and Young, S. (2012) Reinforcement learning for parameter estimation in statistical spoken dialogue systems. Computer Speech and Language, 26, pp. 168-192. ISSN 0885-2308
Reinforcement learning techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estimate the parameters of a dialogue policy which selects the system's responses based on the inferred dialogue state. However, the inference of the dialogue state itself depends on a dialogue model which describes the expected behaviour of a user when interacting with the system. Ideally, the parameters of this dialogue model should also be optimised to maximise the expected cumulative reward. This article presents two novel reinforcement learning algorithms for learning the parameters of a dialogue model. First, the Natural Belief Critic algorithm is designed to optimise the model parameters while the policy is kept fixed. This algorithm is suitable, for example, in systems using a handcrafted policy, perhaps prescribed by other design considerations. Second, the Natural Actor and Belief Critic algorithm jointly optimises both the model and the policy parameters. The algorithms are evaluated on a statistical dialogue system modelled as a Partially Observable Markov Decision Process in a tourist information domain. The evaluation is performed with a user simulator and with real users. The experiments indicate that model parameters estimated to maximise the expected reward provide improved performance compared to the baseline handcrafted parameters. © 2011 Elsevier Ltd. All rights reserved.
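The core idea of optimising model parameters against expected reward with a fixed policy can be illustrated with a toy sketch. This is not the paper's Natural Belief Critic implementation; it is a minimal natural-gradient ascent loop under simplifying assumptions (a Gaussian distribution over parameters and a synthetic `episode_reward` standing in for a simulated dialogue). It uses the standard identity that regressing episode returns on the score function yields an estimate of the natural gradient; all names (`episode_reward`, `natural_gradient_step`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_reward(theta):
    # Toy stand-in for running one dialogue episode: reward peaks when
    # the model parameters theta match a hidden "true" user behaviour.
    true_theta = np.array([0.5, -0.3])
    return -np.sum((theta - true_theta) ** 2) + rng.normal(0.0, 0.01)

def natural_gradient_step(mu, sigma, n_episodes=200, lr=1.0):
    # Sample candidate model parameters around mu, collect episode
    # rewards, and regress rewards on the score function
    # grad_mu log N(theta; mu, sigma^2 I). The least-squares weights
    # (with a baseline intercept) estimate the natural gradient.
    thetas = mu + sigma * rng.standard_normal((n_episodes, mu.size))
    rewards = np.array([episode_reward(t) for t in thetas])
    scores = (thetas - mu) / sigma**2
    X = np.hstack([scores, np.ones((n_episodes, 1))])  # baseline column
    w, *_ = np.linalg.lstsq(X, rewards, rcond=None)
    return mu + lr * w[:-1]  # natural-gradient ascent on the mean

mu = np.zeros(2)  # initial (handcrafted) model parameters
for _ in range(60):
    mu = natural_gradient_step(mu, sigma=0.2)
print(mu)  # drifts toward the hidden optimum [0.5, -0.3]
```

In the paper's setting the policy stays fixed while the dialogue model parameters are updated; here the fixed policy is implicit in the reward function, and only the parameter distribution is adapted.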
Uncontrolled Keywords: Dialogue management; POMDP; Reinforcement learning; Spoken dialogue systems
Divisions: Div F > Machine Intelligence
Depositing User: Cron job
Date Deposited: 04 Feb 2015 22:19
Last Modified: 01 May 2015 18:56