JOURNAL ARTICLE

Language model adaptation using dynamic marginals

Abstract

A new method is presented to quickly adapt a given language model to local text characteristics. The basic approach is to choose the adaptive models as close as possible to the background estimates while constraining them to respect the locally estimated unigram probabilities. Several means are investigated to speed up the calculations. We measure both perplexity and word error rate to gauge the quality of our model.
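The adaptation idea the abstract describes — staying close to the background model while matching locally estimated unigram marginals — is often approximated in closed form by rescaling each background conditional with an exponentiated unigram ratio and renormalizing per history. The sketch below illustrates that form on hypothetical toy distributions; the variable names, the toy vocabulary, and the exponent `beta` are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): adapt a toy bigram model
# toward locally estimated unigram marginals while staying close to the
# background estimates. Each word gets a scaling factor
#   alpha(w) = (p_local(w) / p_bg(w)) ** beta
# and each history's distribution is renormalized afterwards.

def adapt_bigram(p_bg_bigram, p_bg_unigram, p_local_unigram, beta=1.0):
    """Return adapted conditionals p(w|h) for every history h."""
    alpha = {w: (p_local_unigram[w] / p_bg_unigram[w]) ** beta
             for w in p_bg_unigram}
    adapted = {}
    for h, dist in p_bg_bigram.items():
        scaled = {w: alpha[w] * p for w, p in dist.items()}
        z = sum(scaled.values())          # per-history normalizer
        adapted[h] = {w: p / z for w, p in scaled.items()}
    return adapted

# Toy example: vocabulary {a, b}; the local text uses "b" more often
# than the background corpus does.
p_bg_uni = {"a": 0.7, "b": 0.3}
p_loc_uni = {"a": 0.4, "b": 0.6}
p_bg_bi = {"a": {"a": 0.6, "b": 0.4}, "b": {"a": 0.8, "b": 0.2}}

adapted = adapt_bigram(p_bg_bi, p_bg_uni, p_loc_uni, beta=1.0)
# Each adapted conditional still sums to 1, and probability mass
# shifts toward "b", matching the local unigram evidence.
```

Smaller values of `beta` would interpolate more conservatively between the background model and the local marginals; `beta=1.0` here makes the shift easy to see.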

Keywords:
Computer science, Adaptation, Natural language processing, Language model, Artificial intelligence, Programming language, Psychology

Metrics

Cited by: 91
FWCI (Field Weighted Citation Impact): 3.22
References: 6
Citation Normalized Percentile: 0.93

Topics (Physical Sciences → Computer Science → Artificial Intelligence)

Speech Recognition and Synthesis
Topic Modeling
Bayesian Methods and Mixture Models