JOURNAL ARTICLE

Mixed Bayesian networks with auxiliary variables for automatic speech recognition

Abstract

Standard hidden Markov models (HMMs), as used in automatic speech recognition (ASR), compute their emission probabilities with an artificial neural network (ANN) or a Gaussian distribution conditioned on the hidden state variable, treating the emissions as independent of every other variable in the model. Recent work showed the benefit of conditioning the emission distributions on a discrete auxiliary variable, which is observed in training and hidden in recognition. Related work has shown the utility of conditioning the emission distributions on a continuous auxiliary variable. We apply mixed Bayesian networks (BNs) to extend these works by introducing a continuous auxiliary variable that is observed in training but hidden in recognition. We find that an auxiliary pitch variable, itself conditioned on the hidden state, can degrade performance unless the auxiliary variable is also hidden. Performance can be improved further by making the auxiliary pitch variable independent of the hidden state.
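The core idea in the abstract can be illustrated with a minimal numerical sketch. Below, a hypothetical single-state, one-dimensional linear-Gaussian model conditions the emission on a continuous auxiliary (pitch-like) variable a: in training a is observed and the emission density is evaluated directly, while in recognition a is hidden and must be marginalized out. All parameter names and values here are illustrative assumptions, not the paper's actual model or toolkit.

```python
import numpy as np

def gauss(x, mu, var):
    """Univariate Gaussian density N(x; mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Toy per-state parameters (hypothetical values, one acoustic dimension).
# The emission mean depends linearly on the auxiliary value a.
state = {"mu_x": 1.0, "w_a": 0.5, "var_x": 0.4,   # p(x | q, a)
         "mu_a": 0.2, "var_a": 0.3}               # p(a | q)

def emission_observed(x, a, s):
    """Training case: auxiliary variable observed, condition on it directly."""
    return gauss(x, s["mu_x"] + s["w_a"] * a, s["var_x"])

def emission_hidden(x, s, lo=-4.0, hi=4.0, n=4001):
    """Recognition case: auxiliary variable hidden, marginalize it out,
    p(x | q) = ∫ p(x | q, a) p(a | q) da  (simple grid quadrature here)."""
    grid = np.linspace(lo, hi, n)
    dx = grid[1] - grid[0]
    pa = gauss(grid, s["mu_a"], s["var_a"])                 # p(a | q)
    px = gauss(x, s["mu_x"] + s["w_a"] * grid, s["var_x"])  # p(x | q, a)
    return float(np.sum(px * pa) * dx)
```

For this linear-Gaussian toy model the marginal is itself Gaussian, with mean mu_x + w_a * mu_a and variance var_x + w_a**2 * var_a, which gives a quick sanity check on the quadrature; in a real mixed BN the same marginalization is carried out by the inference algorithm rather than by explicit integration.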

Keywords:
Hidden Markov model, Hidden variable theory, Variable (mathematics), Gaussian, Hidden semi-Markov model, Speech recognition, Computer science, Artificial neural network, Mixture model, Pattern recognition (psychology), Artificial intelligence, Bayesian probability, Bayesian network, Markov model, Mathematics, Markov chain, Machine learning, Variable-order Markov model, Physics

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 1.53
Refs: 16
Citation Normalized Percentile: 0.86

Topics

Speech Recognition and Synthesis
Physical Sciences →  Computer Science →  Artificial Intelligence
Neural Networks and Applications
Physical Sciences →  Computer Science →  Artificial Intelligence
Target Tracking and Data Fusion in Sensor Networks
Physical Sciences →  Computer Science →  Artificial Intelligence