CONFERENCE PAPER

Neural networks, maximum mutual information training, and maximum likelihood training (speech recognition)

L.T. Niles, H.F. Silverman, Marcia A. Bush

Year: 2002 Conference: International Conference on Acoustics, Speech, and Signal Processing Pages: 493-496

Abstract

A Gaussian-model classifier trained by maximum mutual information estimation (MMIE) is compared to one trained by maximum-likelihood estimation (MLE) and to an artificial neural network (ANN) on several classification tasks. The similarity of MMIE and ANN results for uniformly distributed data confirms that the ANN is better than the MLE in some cases due to the ANN's use of an error-correcting training algorithm. When the probability model fits the data well, MLE is better than MMIE if the training data are limited, but they are equal if there are enough data. When the model is a poor fit, MMIE is better than MLE. Training dynamics of MMIE and ANN are shown to be similar under certain assumptions. MMIE seems more susceptible to overtraining and computational difficulties than the ANN. Overall, the ANN is the most robust of the classifiers.
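The MLE/MMIE distinction in the abstract can be illustrated concretely: MLE fits each class-conditional Gaussian to its own class's data, while MMIE (equivalently, conditional maximum likelihood for classification) adjusts the same parameters to maximize the posterior probability of the correct class. The sketch below, assuming synthetic 1-D two-class data and gradient ascent on the means only (variances held fixed for simplicity), is an illustration of the two criteria, not the paper's actual experimental setup:

```python
import math
import random

def gaussian_logpdf(x, mu, var):
    """Log density of N(mu, var) at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

# Synthetic 1-D two-class data (hypothetical, for illustration only)
random.seed(0)
data = [(random.gauss(-1.0, 1.0), 0) for _ in range(200)] + \
       [(random.gauss(+1.0, 1.0), 1) for _ in range(200)]

def mle_fit(data):
    """MLE: per-class sample mean and variance."""
    params = []
    for c in (0, 1):
        xs = [x for x, y in data if y == c]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        params.append((mu, var))
    return params

def mmie_fit(data, params, lr=0.05, steps=100):
    """MMIE-style refinement: gradient ascent on sum_i log P(y_i | x_i),
    updating only the class means (variances kept fixed)."""
    mus = [p[0] for p in params]
    vars_ = [p[1] for p in params]
    for _ in range(steps):
        grads = [0.0, 0.0]
        for x, y in data:
            logps = [gaussian_logpdf(x, mus[c], vars_[c]) for c in (0, 1)]
            m = max(logps)                       # stabilize the softmax
            ps = [math.exp(lp - m) for lp in logps]
            z = sum(ps)
            post = [p / z for p in ps]           # P(class | x)
            for c in (0, 1):
                # d/dmu_c of log P(y|x) = (1{y=c} - P(c|x)) * (x - mu_c)/var_c
                indicator = 1.0 if c == y else 0.0
                grads[c] += (indicator - post[c]) * (x - mus[c]) / vars_[c]
        for c in (0, 1):
            mus[c] += lr * grads[c] / len(data)
    return [(mus[c], vars_[c]) for c in (0, 1)]

def accuracy(params, data):
    """Classify by highest class-conditional log-likelihood (equal priors)."""
    correct = 0
    for x, y in data:
        scores = [gaussian_logpdf(x, mu, var) for mu, var in params]
        correct += (scores.index(max(scores)) == y)
    return correct / len(data)
```

The update direction `(indicator - post[c])` is the error-correcting character the abstract attributes to ANN training: well-classified examples (posterior near the indicator) contribute almost nothing, while misclassified ones drive the parameters, which is also why MMIE shares the ANN's susceptibility to overtraining on a small error set.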

Keywords:
Artificial neural network, Computer science, Artificial intelligence, Maximum likelihood, Gaussian, Overtraining, Training set, Machine learning, Classifier, Mutual information, Training, Pattern recognition, Similarity, Speech recognition, Mathematics, Statistics

Metrics

Cited by: 9
FWCI (Field-Weighted Citation Impact): 1.54
References: 5
Citation Normalized Percentile: 0.86

Topics

Neural Networks and Applications
Physical Sciences →  Computer Science →  Artificial Intelligence
Target Tracking and Data Fusion in Sensor Networks
Physical Sciences →  Computer Science →  Artificial Intelligence
Blind Source Separation Techniques
Physical Sciences →  Computer Science →  Signal Processing