JOURNAL ARTICLE

Automatic speech recognition with sparse training data for dysarthric speakers

Abstract

We describe an unusual ASR application: recognition of command words from severely dysarthric speakers, who have poor control of their articulators. The goal is to allow these clients to control assistive technology by voice. While this is a small-vocabulary, speaker-dependent, isolated-word application, the speech material is more variable than normal, and only a small amount of data is available for training. After training a CDHMM recogniser, it is necessary to predict its likely performance without using an independent test set, so that confusable words can be replaced by alternatives. We present a battery of measures of consistency and confusability, based on forced alignment, which can be used to predict recogniser performance. We show how these measures perform, and how they are presented to the clinicians who are the users of the system.
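The abstract describes predicting recogniser performance from the training data itself, by measuring how confusable the vocabulary words are under the trained models. The paper's actual forced-alignment measures are not detailed here; as a minimal illustrative sketch, one generic proxy is to score each training utterance against every word's model and count how often a competing word's model scores higher. The Gaussian word models, feature vectors, and toy data below are all hypothetical, standing in for the CDHMMs and acoustic features of the real system.

```python
# Hypothetical sketch of a confusability proxy: score each training
# utterance under every word's model and count cross-word "wins".
# This is NOT the paper's method, just a generic illustration.
import math
import random

random.seed(0)

def train_gaussian(utts):
    """Fit a diagonal-covariance Gaussian to a list of feature vectors
    (a stand-in for training a per-word CDHMM)."""
    dim, n = len(utts[0]), len(utts)
    mean = [sum(u[d] for u in utts) / n for d in range(dim)]
    var = [max(sum((u[d] - mean[d]) ** 2 for u in utts) / n, 1e-3)
           for d in range(dim)]
    return mean, var

def loglik(x, model):
    """Log-likelihood of one feature vector under a diagonal Gaussian."""
    mean, var = model
    return -0.5 * sum(math.log(2 * math.pi * v) + (xi - m) ** 2 / v
                      for xi, m, v in zip(x, mean, var))

def confusability(data):
    """For each word, the fraction of its own training utterances that a
    competing word's model scores more highly (higher = more confusable)."""
    models = {w: train_gaussian(utts) for w, utts in data.items()}
    conf = {}
    for w, utts in data.items():
        errs = sum(1 for u in utts
                   if max(models, key=lambda m: loglik(u, models[m])) != w)
        conf[w] = errs / len(utts)
    return conf

# Toy 2-D "feature" data: "yes" is well separated; "no" and "go" overlap,
# mimicking a confusable word pair that a clinician might replace.
data = {
    "yes": [[1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)],
    "no":  [[-1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)],
    "go":  [[-0.9 + random.gauss(0, 0.1), 0.1 + random.gauss(0, 0.1)] for _ in range(20)],
}
scores = confusability(data)
print(sorted(scores, key=scores.get))  # words ordered least to most confusable
```

In the paper's setting such a ranking would be computed from forced-alignment scores of the CDHMMs rather than Gaussian likelihoods, but the use is the same: flag high-scoring (confusable) words for replacement before deployment, without needing a held-out test set.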

Keywords:
Speech recognition, Computer science, Training set, Speaker recognition, Artificial intelligence, Dysarthria, Natural language processing, Audiology, Medicine

Metrics

Cited By: 87
FWCI (Field Weighted Citation Impact): 3.45
References: 15
Citation Normalized Percentile: 0.93 (in top 10%)

Topics

Speech Recognition and Synthesis (Physical Sciences → Computer Science → Artificial Intelligence)
Voice and Speech Disorders (Health Sciences → Medicine → Physiology)
Phonetics and Phonology Research (Social Sciences → Psychology → Experimental and Cognitive Psychology)
