JOURNAL ARTICLE

Adversarial Examples for Improving End-to-end Attention-based Small-footprint Keyword Spotting

Abstract

In this paper, we explore the use of adversarial examples to improve a neural-network-based keyword spotting (KWS) system. Specifically, our system uses an effective, small-footprint attention-based neural network model. An adversarial example is an input that a model misclassifies even though it deviates only slightly from an original, correctly classified example. In the KWS task, it is natural to regard falsely accepted or falsely rejected queries as a kind of adversarial example. In our work, given a well-trained attention-based KWS model, we first generate adversarial examples using the fast gradient sign method (FGSM) and find that these examples dramatically degrade KWS performance. Using these adversarial examples as augmented data to retrain the KWS model, we achieve a 45.6% relative reduction in false reject rate (FRR) at a false alarm rate (FAR) of 1.0 per hour on a dataset collected from a smart speaker.
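The FGSM step described in the abstract perturbs an input in the direction of the sign of the loss gradient with respect to that input. A minimal NumPy sketch on a toy logistic classifier illustrates the idea; the model, weights, and epsilon here are illustrative stand-ins, not the paper's attention-based KWS model:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.25):
    """FGSM: x_adv = x + epsilon * sign(dL/dx)."""
    return x + epsilon * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic "keyword present" classifier: p = sigmoid(w.x + b)
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)   # stand-in for an acoustic feature vector
y = 1.0                  # true label: keyword present

p = sigmoid(w @ x + b)
# For binary cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w
grad_x = (p - y) * w

x_adv = fgsm_perturb(x, grad_x)
p_adv = sigmoid(w @ x_adv + b)
# The perturbation increases the loss, pushing the score toward a false reject
assert p_adv < p
```

In the paper's setting, such perturbed utterance features would then be relabeled with their correct targets and mixed into the training data to retrain the KWS model.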

Keywords:
Keyword spotting; adversarial examples; attention-based neural network; small footprint; end-to-end; speech recognition; machine learning

Metrics

Cited by: 42
FWCI (Field-Weighted Citation Impact): 4.45
References: 33
Citation Normalized Percentile: 0.95 (in top 1%, top 10%)

Topics

Speech Recognition and Synthesis (Physical Sciences → Computer Science → Artificial Intelligence)
Adversarial Robustness in Machine Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Anomaly Detection Techniques and Applications (Physical Sciences → Computer Science → Artificial Intelligence)