JOURNAL ARTICLE

Vector quantization of speech using artificial neural learning

Abstract

Artificial neural learning methods can be employed for low bit-rate speech compression in non-stationary environments. Vector quantization (VQ) has been used for many years to compress speech to bit rates below 2400 bits per second (bps). A class of artificial neural networks with unsupervised learning algorithms is particularly well suited to VQ problems. In this paper we discuss the use of unsupervised learning algorithms for obtaining the codebook vectors in an adaptive vector quantizer. In contrast to earlier work, we apply these learning rules to VQ of the prediction residual after LPC and pitch prediction. The performance of these unsupervised learning algorithms for speaker-dependent and speaker-independent speech compression is presented. Our results compare favourably with those of CELP, requiring less computational power at the cost of a tolerable reduction in speech quality. The effects of limited precision on classification and learning in competitive learning algorithms are also explored in this study.
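The codebook design approach the abstract refers to can be illustrated with a minimal winner-take-all competitive learning sketch. This is a generic textbook formulation, not the paper's exact algorithm: the function names (`train_codebook`, `quantize`) and parameters (`epochs`, `lr`) are illustrative assumptions; each training vector attracts only its nearest codeword, which moves a small step toward the input.

```python
import numpy as np

def train_codebook(samples, codebook_size, epochs=10, lr=0.05, seed=0):
    """Train a VQ codebook with winner-take-all competitive learning.

    Generic sketch (not the paper's exact rule): each training vector
    pulls only its nearest codeword a fraction `lr` of the way toward it.
    """
    rng = np.random.default_rng(seed)
    # Initialise codewords from randomly chosen training vectors.
    idx = rng.choice(len(samples), codebook_size, replace=False)
    codebook = samples[idx].astype(float)
    for _ in range(epochs):
        for x in samples:
            # Competition: the codeword nearest to x wins.
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            # Learning: move only the winner toward the input.
            codebook[winner] += lr * (x - codebook[winner])
    return codebook

def quantize(x, codebook):
    """Return the index of the nearest codeword (the symbol transmitted)."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
```

In a residual coder along the lines the abstract describes, `samples` would hold vectors of the prediction residual remaining after LPC and pitch prediction, and only the winning index would be transmitted per vector.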

Keywords:
Codebook, Vector quantization, Computer science, Learning vector quantization, Artificial neural network, Unsupervised learning, Artificial intelligence, Speech recognition, Linde–Buzo–Gray algorithm, Speech coding, Code-excited linear prediction, Competitive learning, Residual, Pattern recognition (psychology), Linear predictive coding, Algorithm

Metrics

Cited by: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 14
Citation Normalized Percentile: 0.04

Topics

Advanced Data Compression Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Speech and Audio Processing
Physical Sciences →  Computer Science →  Signal Processing
Digital Filter Design and Implementation
Physical Sciences →  Computer Science →  Signal Processing
