CUED Publications database

Robust excitation-based features for Automatic Speech Recognition

Drugman, T and Stylianou, Y and Chen, L and Chen, X and Gales, MJF (2015) Robust excitation-based features for Automatic Speech Recognition. In: UNSPECIFIED pp. 4664-4668.

Full text not available from this repository.

Abstract

© 2015 IEEE. In this paper we investigate the use of noise-robust features characterizing the speech excitation signal as features complementary to the usually considered vocal-tract-based features for Automatic Speech Recognition (ASR). The proposed Excitation-based Features (EBF) are tested in a state-of-the-art Deep Neural Network (DNN) based hybrid acoustic model for speech recognition. The suggested excitation features expand the set of periodicity features previously considered for ASR, with the expectation that they help to better discriminate the broad phonetic classes (e.g., fricatives, nasals, vowels, etc.). Our experiments on the AMI meeting transcription system showed that the proposed EBF yield a relative word error rate reduction of about 5% when combined with conventional PLP features. Further experiments conducted on Aurora4 confirmed the robustness of the EBF to both additive and convolutive noises, with a relative improvement of 4.3% obtained by combining them with mel filter banks.

Item Type: Conference or Workshop Item (UNSPECIFIED)
Subjects: UNSPECIFIED
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:37
Last Modified: 15 Aug 2017 01:26
DOI: