Liu, X., Gales, M. J. F., and Woodland, P. C. (2012) Paraphrastic language models. 13th Annual Conference of the International Speech Communication Association (INTERSPEECH 2012), vol. 2, pp. 1654-1657.
In natural languages, multiple word sequences can represent the same underlying meaning. Modelling only the observed surface word sequence can result in poor context coverage, for example, when using n-gram language models (LMs). To handle this issue, this paper presents a novel form of language model, the paraphrastic LM. A phrase-level transduction model, statistically learned from standard text data, is used to generate paraphrase variants. LM probabilities are then estimated by maximizing the marginal probability of the observed word sequences over these variants. Significant error rate reductions of 0.5%-0.6% absolute were obtained on a state-of-the-art conversational telephone speech recognition task using a paraphrastic multi-level LM modelling both word and phrase sequences.
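The abstract's core idea, estimating LM probabilities from weighted paraphrase variants rather than surface text alone, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes each training sentence comes with a hypothetical set of paraphrase variants whose probabilities (e.g. from a phrase-level transduction model) sum to one, and it accumulates fractional bigram counts over those variants before a maximum-likelihood estimate.

```python
from collections import defaultdict

def paraphrastic_bigram_probs(paraphrase_sets):
    """Estimate bigram probabilities from weighted paraphrase variants.

    paraphrase_sets: one list per training sentence, each containing
    (word_sequence, probability) pairs. Probabilities within a sentence's
    variant set are assumed (hypothetically) to sum to 1.
    """
    counts = defaultdict(float)        # fractional bigram counts
    context_totals = defaultdict(float)  # fractional unigram context counts
    for variants in paraphrase_sets:
        for words, prob in variants:
            padded = ["<s>"] + list(words) + ["</s>"]
            for w1, w2 in zip(padded, padded[1:]):
                counts[(w1, w2)] += prob
                context_totals[w1] += prob
    # Maximum-likelihood bigram probabilities from the fractional counts
    return {bg: c / context_totals[bg[0]] for bg, c in counts.items()}

# Toy example: one sentence with two paraphrase variants.
sets = [[(["buy", "a", "car"], 0.7),
         (["purchase", "a", "car"], 0.3)]]
probs = paraphrastic_bigram_probs(sets)
# "purchase" now contributes probability mass even though "buy" may be
# the only form observed at test time, improving context coverage.
```

A real paraphrastic LM would also apply smoothing and, per the paper, operate at both the word and phrase level; this sketch only shows how marginalizing over variants spreads probability mass across paraphrases.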
Divisions: Div F > Machine Intelligence
Date Deposited: 18 May 2016 17:55
Last Modified: 30 Jun 2016 23:54