CUED Publications database

Multiple-average-voice-based speech synthesis

Lanchantin, P and Gales, MJF and King, S and Yamagishi, J (2014) Multiple-average-voice-based speech synthesis. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. pp. 285-289. ISSN 1520-6149

Full text not available from this repository.

Abstract

This paper describes a novel approach to the speaker adaptation of statistical parametric speech synthesis systems based on the interpolation of a set of average voice models (AVMs). Recent results have shown that the quality/naturalness of adapted voices depends on the distance of the target speaker from the average voice model used for speaker adaptation. This suggests training several AVMs on carefully chosen speaker clusters, from which a more suitable AVM can be selected or interpolated during adaptation. In the proposed approach, a set of AVMs, a multiple-AVM, is trained on distinct clusters of speakers which are iteratively re-assigned during the estimation process, initialised according to metadata. During adaptation, each AVM from the multiple-AVM is first adapted towards the target speaker. The adapted means from the AVMs are then interpolated to yield the final speaker-adapted mean for synthesis. Experiments in which speaker adaptation is performed on a corpus of British speakers with various regional accents show that the quality/naturalness of the synthetic speech of adapted voices is significantly higher than with a single factor-independent AVM selected according to the target speaker's characteristics. © 2014 IEEE.
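The final step described in the abstract — interpolating the speaker-adapted means from each AVM into a single mean for synthesis — can be sketched as below. This is an illustrative sketch only: the function name is hypothetical, the adapted mean vectors stand in for the per-AVM means produced by the adaptation step, and fixed example weights are used, whereas in the described approach the interpolation weights would be determined for each target speaker.

```python
import numpy as np

def interpolate_adapted_means(adapted_means, weights):
    """Interpolate speaker-adapted mean vectors from several AVMs.

    adapted_means: list of K arrays of shape (D,), one per AVM, each
        already adapted towards the target speaker.
    weights: K interpolation weights (normalised here for safety).

    Returns the final speaker-adapted mean of shape (D,).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # ensure weights sum to 1
    stacked = np.stack(adapted_means)  # shape (K, D)
    return w @ stacked                 # weighted combination, shape (D,)

# Hypothetical usage: two AVMs, each adapted to the target speaker,
# combined with weights favouring the closer cluster.
mean_a = np.array([1.0, 2.0, 3.0])
mean_b = np.array([3.0, 2.0, 1.0])
final_mean = interpolate_adapted_means([mean_a, mean_b], [0.75, 0.25])
```

The per-cluster adaptation itself (e.g. a linear-transform-based scheme) and the estimation of the weights are not shown; the sketch only captures how the adapted means combine into the mean used for synthesis.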

Item Type: Article
Subjects: UNSPECIFIED
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:05
Last Modified: 01 Aug 2017 03:02
DOI: