JOURNAL ARTICLE

Bidirectional recurrent neural network language models for automatic speech recognition

Abstract

Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts.
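The forward/backward factorization the abstract describes can be sketched in plain NumPy. This is a toy illustration, not the paper's model: the vocabulary and hidden sizes, the weight names, and the single input matrix `U` shared by both directions are assumptions made for brevity. A forward recurrence summarizes the words to the left of each position, a backward recurrence summarizes the words to its right, and the output layer combines both contexts to predict the word in between.

```python
import numpy as np

rng = np.random.default_rng(0)

V, H = 10, 8                            # toy vocabulary size and hidden size (assumed)
E = rng.normal(0, 0.1, (V, H))          # word embeddings
U = rng.normal(0, 0.1, (H, H))          # input-to-hidden weights (shared here for brevity)
W_f = rng.normal(0, 0.1, (H, H))        # forward (left-to-right) recurrence
W_b = rng.normal(0, 0.1, (H, H))        # backward (right-to-left) recurrence
W_out = rng.normal(0, 0.1, (2 * H, V))  # maps [left context; right context] to word scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def bidirectional_rnn_lm(words):
    """Return P(w_t | w_{<t}, w_{>t}) for each position t in the sentence."""
    T = len(words)
    h_f = np.zeros((T + 1, H))  # h_f[t] summarizes words[:t] (past context)
    h_b = np.zeros((T + 1, H))  # h_b[t] summarizes words[t:] (future context)
    for t in range(T):                      # left-to-right pass
        h_f[t + 1] = np.tanh(E[words[t]] @ U + h_f[t] @ W_f)
    for t in range(T - 1, -1, -1):          # right-to-left pass
        h_b[t] = np.tanh(E[words[t]] @ U + h_b[t + 1] @ W_b)
    # word t is predicted from h_f[t] (words before t) and h_b[t + 1] (words after t),
    # so the target word itself is excluded from both contexts
    return np.stack([softmax(np.concatenate([h_f[t], h_b[t + 1]]) @ W_out)
                     for t in range(T)])

probs = bidirectional_rnn_lm([3, 1, 4, 1, 5])
print(probs.shape)  # one distribution over V words per position: (5, 10)
```

Because each prediction conditions on the entire sentence, a model of this form cannot be applied left to right during first-pass decoding, which is one of the practical issues for speech recognition that the paper discusses.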

Keywords:
Recurrent neural network, Computer science, Language model, Speech recognition, Context, Artificial neural network, Artificial intelligence, Time delay neural network, Long short-term memory, Context model

Metrics

Cited by: 83
FWCI (Field-Weighted Citation Impact): 6.60
References: 24
Citation Normalized Percentile: 0.98 (in top 1% and top 10%)

Topics

Natural Language Processing Techniques
Physical Sciences →  Computer Science →  Artificial Intelligence
Speech Recognition and Synthesis
Physical Sciences →  Computer Science →  Artificial Intelligence
Topic Modeling
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Latent Words Recurrent Neural Network Language Models for Automatic Speech Recognition

Ryo Masumura, Taichi Asami, Takanobu Oba, Sumitaka Sakauchi, Akinori Ito

Journal:   IEICE Transactions on Information and Systems   Year: 2019   Vol: E102.D (12)   Pages: 2557-2567
DISSERTATION

Context Enhancement of Recurrent Neural Network Language Models for Automatic Speech Recognition

Michael Hentschel

University:   NAIST Digital Library (Nara Institute of Science and Technology) Year: 2019
JOURNAL ARTICLE

Automatic Speech Recognition using Recurrent Neural Network

Sruthi Vandhana T

Journal:   International Journal of Engineering Research and   Year: 2020   Vol: V9 (08)