CUED Publications database

Personalising speech-to-speech translation in the EMIME project

Kurimo, M and Byrne, W and Dines, J and Garner, PN and Gibson, M and Guan, Y and Hirsimäki, T and Karhila, R and King, S and Liang, H and Oura, K and Saheer, L and Shannon, M and Shiota, S and Tian, J and Tokuda, K and Wester, M and Wu, YJ and Yamagishi, J (2010) Personalising speech-to-speech translation in the EMIME project. In: UNSPECIFIED pp. 48-53.

Full text not available from this repository.


In the EMIME project we have studied unsupervised cross-lingual speaker adaptation. We have employed an HMM statistical framework for both speech recognition and synthesis, which provides transformation mechanisms to adapt the synthesized voice in TTS (text-to-speech) using the voice recognized by ASR (automatic speech recognition). An important application of this research is personalised speech-to-speech translation, which uses the voice of the speaker in the input language to utter the translated sentences in the output language. In mobile environments this enhances users' interaction across language barriers by making the output speech sound more like the original speaker's way of speaking, even if he or she cannot speak the output language. © 2010 Association for Computational Linguistics.
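The idea the abstract describes can be illustrated with a minimal sketch: a speaker transform is estimated from the speaker's input-language speech (obtained via ASR) and then applied to the average-voice TTS model of the output language. The affine mean transform below and its least-squares estimation are illustrative assumptions for clarity, not the specific adaptation method used in EMIME; all function names and data are hypothetical.

```python
import numpy as np

def estimate_transform(avg_means, speaker_means):
    """Fit an affine transform mu_speaker ~ W @ mu_avg + b by least squares.

    avg_means: (N, D) mean vectors of the average-voice model states.
    speaker_means: (N, D) corresponding means observed for the target speaker.
    """
    # Append a bias column so W and b are estimated jointly.
    X = np.hstack([avg_means, np.ones((avg_means.shape[0], 1))])
    sol, *_ = np.linalg.lstsq(X, speaker_means, rcond=None)
    W, b = sol[:-1].T, sol[-1]
    return W, b

def adapt(tts_means, W, b):
    """Apply the speaker transform to output-language TTS mean vectors."""
    return tts_means @ W.T + b

# Synthetic demonstration: recover a known transform from input-language
# data, then apply it to (hypothetical) output-language TTS means.
rng = np.random.default_rng(0)
true_W = np.array([[1.1, 0.0], [0.2, 0.9]])
true_b = np.array([0.5, -0.3])
avg = rng.normal(size=(50, 2))            # average-voice state means
spk = avg @ true_W.T + true_b             # same states in the speaker's voice
W, b = estimate_transform(avg, spk)
adapted = adapt(rng.normal(size=(10, 2)), W, b)
```

The key point, as in the abstract, is that the transform is estimated in one language but applied to models of another, so no output-language speech from the speaker is required.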

Item Type: Conference or Workshop Item (UNSPECIFIED)
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 25 Mar 2019 20:04
Last Modified: 18 Feb 2021 16:26