JOURNAL ARTICLE

Robust Speech Recognition Using KPCA-Based Noise Classification

Nattanun Thatphithakkul, Boontee Kruatrachue, Chai Wutiwiwatchai, Sanparith Marukatat, Vataya Boonpiam

Year: 2006 · Journal: ECTI Transactions on Computer and Information Technology (ECTI-CIT) · Vol: 2 (1) · Pages: 45-53 · Publisher: Chiang Mai University

Abstract

This paper proposes an environmental noise classification method using kernel principal component analysis (KPCA) for robust speech recognition. Once the type of noise is identified, speech recognition performance can be improved by selecting the acoustic model trained for that specific noise. The proposed method applies KPCA to a set of noise features, such as the normalized logarithmic spectrum (NLS), and the KPCA outputs are fed to a support vector machine (SVM) classifier for noise classification. The method is evaluated on two groups of environments. The first group contains a clean environment and 9 types of noisy environments that were trained into the system; the second group contains 6 other noise types not seen in training. Noisy speech is prepared by adding noise signals from the JEIDA and NOISEX-92 databases to clean speech taken from the NECTEC-ATR Thai speech corpus. The proposed method shows promising results on a phoneme-based, 640-word Thai isolated-word recognition task.
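
To make the described pipeline concrete, below is a minimal sketch of a KPCA-plus-SVM noise classifier of the kind the abstract outlines, using scikit-learn as a stand-in implementation. The NLS feature computation, the RBF kernel choice, and all parameter values (FFT size, n_components, gamma) are illustrative assumptions, not the configuration reported in the paper.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def normalized_log_spectrum(frame: np.ndarray, n_fft: int = 512) -> np.ndarray:
    """Normalized logarithmic spectrum (NLS) of one noise/speech frame.

    Assumed form: mean/variance-normalized log magnitude spectrum; the
    paper's exact normalization may differ.
    """
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft))
    log_spec = np.log(spectrum + 1e-10)
    return (log_spec - log_spec.mean()) / (log_spec.std() + 1e-10)

# Placeholder data: one NLS feature vector per segment, with integer labels
# standing in for the noise types (e.g. 0 = clean, 1..9 = trained noises).
rng = np.random.default_rng(0)
X = np.stack([normalized_log_spectrum(rng.standard_normal(512)) for _ in range(100)])
y = rng.integers(0, 10, size=100)

# KPCA projects the NLS features into a nonlinear subspace; the SVM then
# classifies the projected features into one of the known noise types.
clf = make_pipeline(
    KernelPCA(n_components=32, kernel="rbf", gamma=1e-3),  # assumed settings
    SVC(kernel="rbf"),
)
clf.fit(X, y)
predicted_noise_type = clf.predict(X[:1])  # used to select the noise-specific acoustic model

In practice, the predicted noise type would index into a bank of acoustic models, and recognition would proceed with the model matching the detected environment, as the abstract describes.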

Keywords:
Speech recognition, Pattern recognition, Computer science, Classifier, Kernel principal component analysis, Support vector machine, Artificial intelligence, Noise, Principal component analysis, Kernel method

Metrics

Cited by: 4
FWCI (Field-Weighted Citation Impact): 0.00
References: 11
Citation Normalized Percentile: 0.35

Topics

Speech and Audio Processing (Physical Sciences → Computer Science → Signal Processing)
Speech Recognition and Synthesis (Physical Sciences → Computer Science → Artificial Intelligence)
Music and Audio Processing (Physical Sciences → Computer Science → Signal Processing)