CUED Publications database

Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems

Su, PH and Vandyke, D and Gašić, M and Mrkšić, N and Wen, TH and Young, S (2015) Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. In: UNSPECIFIED, pp. 417-421.

Full text not available from this repository.

Abstract

© 2015 Association for Computational Linguistics. Statistical spoken dialogue systems have the attractive property of being optimisable from data via interactions with real users. However, in the reinforcement learning paradigm, the dialogue manager (agent) often requires significant time to explore the state-action space before it learns to behave in a desirable manner. This is a critical issue when the system is trained on-line with real users, where the cost of learning is high. Reward shaping is one promising technique for addressing these concerns. Here we examine three recurrent neural network (RNN) approaches for providing reward shaping information in addition to the primary (task-orientated) environmental feedback. These RNNs are trained on returns from dialogues generated by a simulated user and attempt to diffuse the overall evaluation of the dialogue back down to the turn level, guiding the agent towards good behaviour faster. In both simulated and real user scenarios these RNNs are shown to increase policy learning speed. Importantly, they do not require prior knowledge of the user's goal.
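To make the idea concrete, the sketch below (a hypothetical illustration, not the authors' implementation) shows one standard way such turn-level shaping can be added to a sparse task reward: a small RNN predicts a return estimate at every turn, and those estimates are used as potentials in a potential-based shaping term. The class and function names (`SimpleRNN`, `shape_rewards`) and the random weights are assumptions for illustration only; in the paper the RNNs are trained on returns from simulated dialogues.

```python
import numpy as np

rng = np.random.default_rng(0)

class SimpleRNN:
    """Tiny Elman-style RNN mapping a sequence of turn-feature vectors to a
    scalar predicted return at every turn. Weights are random here purely
    for illustration; a real system would train them on dialogue returns."""
    def __init__(self, n_in, n_hid):
        self.W_in = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.W_h = rng.normal(0.0, 0.1, (n_hid, n_hid))
        self.w_out = rng.normal(0.0, 0.1, n_hid)

    def predict_returns(self, turns):
        h = np.zeros(self.W_h.shape[0])
        preds = []
        for x in turns:                       # one feature vector per turn
            h = np.tanh(self.W_in @ x + self.W_h @ h)
            preds.append(self.w_out @ h)      # scalar return estimate
        return np.array(preds)

def shape_rewards(env_rewards, potentials, gamma=0.99):
    """Potential-based shaping: r'_t = r_t + gamma*Phi(s_{t+1}) - Phi(s_t),
    taking Phi from the RNN's turn-level predictions and Phi = 0 after the
    final turn, so the shaped signal is dense even when the task reward
    arrives only at dialogue end."""
    phi = np.append(potentials, 0.0)
    t = np.arange(len(env_rewards))
    return env_rewards + gamma * phi[t + 1] - phi[t]

# Usage: a 5-turn dialogue with a sparse success reward only at the end.
turns = [rng.normal(size=8) for _ in range(5)]
env_rewards = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
rnn = SimpleRNN(n_in=8, n_hid=16)
shaped = shape_rewards(env_rewards, rnn.predict_returns(turns))
```

With gamma = 1 the shaping terms telescope, so the total shaped reward equals the total environmental reward minus the initial potential; the optimal policy is therefore unchanged while intermediate turns receive informative feedback.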

Item Type: Conference or Workshop Item (UNSPECIFIED)
Subjects: UNSPECIFIED
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:37
Last Modified: 24 Aug 2017 01:29