CUED Publications database

Two efficient lattice rescoring methods using recurrent neural network language models

Liu, X and Chen, X and Wang, Y and Gales, MJF and Woodland, PC (2016) Two efficient lattice rescoring methods using recurrent neural network language models. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24. pp. 1438-1449. ISSN 2329-9290

Full text not available from this repository.


© 2014 IEEE. An important part of the language modelling problem for automatic speech recognition (ASR) systems, and many other related applications, is to appropriately model long-distance context dependencies in natural languages. Hence, statistical language models (LMs) that can model longer span history contexts, for example, recurrent neural network language models (RNNLMs), have become increasingly popular for state-of-the-art ASR systems. As RNNLMs use a vector representation of complete history contexts, they are normally used to rescore N-best lists. Motivated by their intrinsic characteristics, two efficient lattice rescoring methods for RNNLMs are proposed in this paper. The first method uses an n-gram style clustering of history contexts. The second approach directly exploits the distance measure between recurrent hidden history vectors. Both methods produced 1-best performance comparable to a 10k-best rescoring baseline RNNLM system on two large vocabulary conversational telephone speech recognition tasks for US English and Mandarin Chinese. Consistent lattice size compression and recognition performance improvements after confusion network (CN) decoding were also obtained over the prefix tree structured N-best rescoring approach.
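The two approximations described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the function names, the merge procedure, and the use of Euclidean distance with a fixed threshold are illustrative choices. It shows the core idea, that two RNNLM histories may be treated as equivalent during lattice expansion if they share the same truncated n-gram context (method 1) or if their recurrent hidden vectors are sufficiently close (method 2).

```python
import math

def ngram_key(history, n=4):
    # Method 1 (assumed form): cluster full histories by their
    # most recent n-1 words, as in an n-gram approximation.
    return tuple(history[-(n - 1):])

def hidden_distance(h1, h2):
    # Method 2 (assumed metric): Euclidean distance between the
    # recurrent hidden history vectors of two partial paths.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def merge_histories(histories, n=4, dist_threshold=0.1):
    """Greedily keep one representative per cluster of histories that
    are equivalent under either criterion. `histories` is a list of
    (word_history_tuple, hidden_vector) pairs at one lattice node."""
    clusters = []  # (ngram_key, hidden_vector, representative_history)
    for hist, hidden in histories:
        key = ngram_key(hist, n)
        merged = any(
            key == ckey or hidden_distance(hidden, chid) <= dist_threshold
            for ckey, chid, _ in clusters
        )
        if not merged:
            clusters.append((key, hidden, hist))
    return [rep for _, _, rep in clusters]
```

In a real rescorer the surviving representatives would be the only histories propagated forward through the lattice, which is what yields the reported lattice-size compression relative to exhaustive N-best expansion.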

Item Type: Article
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:00
Last Modified: 07 Jun 2018 02:08