JOURNAL ARTICLE

Lightly supervised acoustic model training using consensus networks

Abstract

The paper presents some recent work on using consensus networks to improve lightly supervised acoustic model training for the LIMSI Mandarin BN system. Lightly supervised acoustic model training has been attracting growing interest, since it can help to reduce the development costs for speech recognition systems substantially. Compared to supervised training with accurate transcriptions, the key problem in lightly supervised training is getting the approximate transcripts to be as close as possible to manually produced detailed ones, i.e., finding a proper way to provide the information for supervision. Previous work using a language model to provide supervision has been quite successful. The paper extends the original method by presenting a new way to get the information needed for supervision during training. Studies are carried out using the TDT4 Mandarin audio corpus and associated closed-captions. After automatically recognizing the training data, the closed-captions are aligned with a consensus network derived from the hypothesized lattices. As is the case with closed-caption filtering, this method can remove speech segments whose automatic transcripts contain errors, but it can also recover errors in the hypothesis if the information is present in the lattice. Experimental results show that, compared with simply training on all of the data, consensus network based lightly supervised acoustic model training results in a small reduction in the character error rate on the DARPA/NIST RT'03 development and evaluation data.
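The abstract's core mechanism can be illustrated with a toy sketch. This is a hedged, simplified illustration, not the paper's implementation: here a consensus (confusion) network is assumed to be a list of slots, each mapping candidate words to posterior probabilities, and the closed-caption is assumed to be already slot-aligned (real systems derive the network from recognition lattices and align captions by dynamic programming). The names `filter_segment` and `max_mismatch_rate` are illustrative.

```python
def filter_segment(consensus_net, caption, max_mismatch_rate=0.3):
    """Align a closed-caption with a toy consensus network.

    consensus_net: list of slots; each slot is a dict {word: posterior}.
    caption: list of caption words, assumed one word per slot.
    Returns (transcript, keep): the corrected transcript and whether the
    segment passes the mismatch-rate filter.
    """
    transcript = []
    mismatches = 0
    for slot, cap_word in zip(consensus_net, caption):
        best = max(slot, key=slot.get)  # 1-best hypothesis for this slot
        if cap_word in slot:
            # The caption word appears among the lattice alternatives:
            # trust it, even when it was not the 1-best hypothesis.
            # This is the "recover errors from the lattice" effect.
            transcript.append(cap_word)
        else:
            # The lattice offers no support for the caption word:
            # keep the 1-best and count a mismatch.
            transcript.append(best)
            mismatches += 1
    rate = mismatches / max(len(caption), 1)
    return transcript, rate <= max_mismatch_rate
```

A segment whose caption disagrees too often with everything in the lattice is filtered out, mirroring closed-caption filtering; a segment where the caption word is merely a lower-ranked lattice alternative is kept with the caption word restored.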

Keywords:
Computer science, NIST, Word error rate, Artificial intelligence, Mandarin Chinese, Speech recognition, Language model, Machine learning, Training, Acoustic model, Supervised learning, Reduction, Artificial neural network, Natural language processing, Speech processing

Metrics

Cited By: 18
FWCI (Field Weighted Citation Impact): 2.70
Refs: 13
Citation Normalized Percentile: 0.91


Topics

Speech Recognition and Synthesis
Physical Sciences → Computer Science → Artificial Intelligence
Music and Audio Processing
Physical Sciences → Computer Science → Signal Processing
Speech and Audio Processing
Physical Sciences → Computer Science → Signal Processing

Related Documents

JOURNAL ARTICLE

Lightly supervised and unsupervised acoustic model training

Lori Lamel, Jean-Luc Gauvain, Gilles Adda

Journal: Computer Speech & Language, Year: 2002, Vol: 16 (1), Pages: 115-129

DISSERTATION

Speech Recognition Enhanced by Lightly-supervised and Semi-supervised Acoustic Model Training

Sheng Li

Repository: Kyoto University Research Information Repository (Kyoto University), Year: 2016