Chen, X and Liu, X and Ragni, A and Wang, Y and Gales, MJF (2017) Future word contexts in neural network language models. 2017 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2017 - Proceedings, 2018-January. pp. 97-103.
Full text not available from this repository.

Abstract
Recently, bidirectional recurrent neural network language models (bi-RNNLMs) have been shown to outperform standard, unidirectional, recurrent neural network language models (uni-RNNLMs) on a range of speech recognition tasks. This indicates that future word context information, beyond the word history, can be useful. However, bi-RNNLMs pose a number of challenges because they make use of the complete previous and future word context. This impacts both training efficiency and their use within a lattice rescoring framework. In this paper these issues are addressed by proposing a novel neural network structure, succeeding word RNNLMs (suRNNLMs). Instead of using a recurrent unit to capture the complete future word context, a feedforward unit is used to model a finite number of succeeding (future) words. This model can be trained much more efficiently than bi-RNNLMs and can also be used for lattice rescoring. Experimental results on a meeting transcription task (AMI) show that the proposed model consistently outperforms uni-RNNLMs and yields only a slight degradation compared to bi-RNNLMs in N-best rescoring. Additionally, performance improvements can be obtained using lattice rescoring and subsequent confusion network decoding.
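The core architectural idea in the abstract, combining a recurrent unit over the word history with a feedforward unit over a fixed window of succeeding words, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration in PyTorch (the paper does not specify a toolkit); names such as `SuRNNLM`, `n_future`, and the chosen layer sizes are hypothetical and not taken from the paper.

```python
# Minimal sketch of the succeeding-word RNNLM (suRNNLM) idea from the abstract.
# Assumptions: PyTorch, GRU for the history recurrence, tanh feedforward unit,
# and simple concatenation of the two representations before the softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuRNNLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, n_future=2):
        super().__init__()
        self.n_future = n_future
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Recurrent unit over the complete word history (as in a uni-RNNLM).
        self.history_rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # Feedforward unit over a fixed number of succeeding words,
        # replacing the backward recurrence of a bi-RNNLM.
        self.future_ff = nn.Linear(n_future * emb_dim, hidden_dim)
        self.output = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, history, future):
        # history: (batch, t) word ids for the preceding context
        # future:  (batch, n_future) word ids for the succeeding window
        _, h = self.history_rnn(self.embed(history))          # (1, batch, hidden)
        f = torch.tanh(self.future_ff(self.embed(future).flatten(1)))
        combined = torch.cat([h.squeeze(0), f], dim=-1)
        return F.log_softmax(self.output(combined), dim=-1)   # distribution over next word

# Usage sketch: predict the word following a 5-word history, given 2 succeeding words.
model = SuRNNLM(vocab_size=10000)
log_probs = model(torch.randint(0, 10000, (1, 5)), torch.randint(0, 10000, (1, 2)))
```

Because the future context is a fixed-size window rather than a full backward recurrence, training does not require processing complete sentences in both directions, which is the efficiency and lattice-rescoring advantage the abstract describes.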
| Item Type: | Article |
| --- | --- |
| Uncontrolled Keywords: | Bidirectional recurrent neural network language model; succeeding words; speech recognition |
| Subjects: | UNSPECIFIED |
| Divisions: | Div F > Machine Intelligence |
| Depositing User: | Cron Job |
| Date Deposited: | 15 Mar 2018 01:38 |
| Last Modified: | 08 Apr 2021 06:37 |
| DOI: | 10.1109/ASRU.2017.8268922 |