CUED Publications database

Dialogue context sensitive HMM-based speech synthesis

Tsiakoulis, P and Breslin, C and Gasic, M and Henderson, M and Kim, D and Szummer, M and Thomson, B and Young, S (2014) Dialogue context sensitive HMM-based speech synthesis. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. pp. 2554-2558. ISSN 1520-6149

Full text not available from this repository.


The focus of this work is speech synthesis tailored to the needs of spoken dialogue systems. More specifically, the framework of HMM-based speech synthesis is used to train an emphatic voice that also takes dialogue context into account during decision tree state clustering. To achieve this, we designed and recorded a speech corpus comprising system prompts from human-computer interaction, as well as additional prompts for slot-level emphasis. This corpus, combined with a general-purpose text-to-speech corpus, was used to train voices using a) baseline context features, b) additional emphasis features, and c) additional dialogue context features. Both the emphasis and the dialogue context features are extracted from the dialogue act semantic representation. The voices were evaluated in pairs for dialogue appropriateness using a preference listening test. The results show that the emphatic voice is preferred to the baseline when emphasis markup is present, while the dialogue context-sensitive voice is preferred to the plain emphatic one when no emphasis markup is present and preferred to the baseline in both cases. This demonstrates that including dialogue context features in decision tree state clustering significantly improves the quality of the synthetic voice for dialogue. © 2014 IEEE.
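The core idea of the abstract, augmenting the question set used for decision tree state clustering with dialogue-context questions alongside the usual phonetic ones, can be illustrated with a minimal sketch. This is not the paper's implementation: the question names (`is_vowel`, `is_emphasized`, `act_is_confirm`), the context dictionary layout, and the use of scalar 1-D observations are all illustrative assumptions; a real HMM system would split state occupancy statistics over full acoustic feature vectors.

```python
import math

# Hypothetical question set: baseline phonetic questions plus the kind of
# emphasis / dialogue-act questions the abstract describes (all names assumed).
QUESTIONS = {
    "is_vowel":       lambda c: c["phone"] in {"a", "e", "i", "o", "u"},
    "is_emphasized":  lambda c: c["emphasis"],                  # slot-level emphasis flag
    "act_is_confirm": lambda c: c["dialogue_act"] == "confirm",
}

def cluster_ll(samples):
    """Max Gaussian log-likelihood of pooling 1-D samples into one cluster."""
    n = len(samples)
    if n == 0:
        return 0.0
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    var = max(var, 1e-6)  # variance floor to avoid log(0)
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def best_split(states):
    """Greedy step of tree clustering: pick the question whose yes/no
    partition of the states maximizes the log-likelihood gain."""
    base = cluster_ll([x for s in states for x in s["samples"]])
    best = None
    for name, question in QUESTIONS.items():
        yes = [x for s in states if question(s["context"]) for x in s["samples"]]
        no = [x for s in states if not question(s["context"]) for x in s["samples"]]
        gain = cluster_ll(yes) + cluster_ll(no) - base
        if best is None or gain > best[1]:
            best = (name, gain)
    return best
```

With toy data in which emphasized contexts have shifted acoustics, the emphasis question yields the largest likelihood gain and would be chosen first, which is the mechanism by which the extra context features can influence the trained voice.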

Item Type: Article
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:16
Last Modified: 22 May 2018 07:18