We address the problem of using word graphs (or lattices) to integrate complex knowledge sources, such as long-span language models or acoustic cross-word models, in large vocabulary continuous speech recognition. A method for efficiently constructing a word graph is reviewed, and two ways of exploiting it are presented. Under the word pair approximation, a phrase-level search is possible; in the other case, a general graph decoder is set up. We show that the predecessor-word identity provided by a first bigram decoding can be used to constrain the word graph without impairing the next pass. This procedure has been applied to 64k-word trigram decoding in conjunction with an incremental unsupervised speaker adaptation scheme. Experimental results are given for the North American Business corpus used in the November '94 evaluation.
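To make the rescoring idea concrete, the following is a minimal sketch of exact trigram rescoring over a word graph. The graph, the edge scores, and the `trigram_logprob` stand-in are all illustrative assumptions, not the paper's data or method: a real system would use lattice acoustic scores and a trained language model. The dynamic-programming state pairs each graph node with the last two words of the hypothesis, which is what a trigram model requires.

```python
from collections import defaultdict

# Hypothetical toy word graph: edges are (start_node, end_node, word,
# acoustic log-probability). Nodes are numbered in topological order.
EDGES = [
    (0, 1, "move", -1.2),
    (0, 1, "moved", -1.5),
    (1, 2, "the", -0.3),
    (1, 2, "a", -0.9),
    (2, 3, "markets", -0.8),
    (2, 3, "market", -0.7),
]
FINAL = 3


def trigram_logprob(w1, w2, w3):
    """Stand-in trigram LM with one favored trigram (illustrative only)."""
    table = {("move", "the", "markets"): -0.5}
    return table.get((w1, w2, w3), -2.0)


def rescore(edges, final):
    """Exact trigram rescoring: DP state = (node, last two words)."""
    out = defaultdict(list)
    for s, e, w, a in edges:
        out[s].append((e, w, a))
    # best[(node, (w1, w2))] = (total log score, word sequence so far)
    best = {(0, ("<s>", "<s>")): (0.0, [])}
    for node in sorted({s for s, *_ in edges} | {final}):
        for (n, hist), (score, words) in list(best.items()):
            if n != node:
                continue
            for e, w, a in out[n]:
                new = score + a + trigram_logprob(hist[0], hist[1], w)
                key = (e, (hist[1], w))
                if key not in best or new > best[key][0]:
                    best[key] = (new, words + [w])
    # Pick the best hypothesis that reaches the final node.
    finals = [v for (n, _), v in best.items() if n == final]
    return max(finals)


score, words = rescore(EDGES, FINAL)
```

Because the history state is only the last two words, hypotheses that share a node and a two-word history are merged, which keeps the search compact compared with enumerating every path through the graph.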
S. Ortmanns, Hermann Ney, Xavier Aubert