JOURNAL ARTICLE

Parameter Efficient Transfer Learning for Various Speech Processing Tasks

Abstract

Fine-tuning of self-supervised models is a powerful transfer learning method in a variety of fields, including speech processing, since it can exploit generic feature representations learned from large amounts of unlabeled data. Fine-tuning, however, requires a new set of parameters for each downstream task, which is parameter inefficient. The adapter architecture partially addresses this issue by inserting lightweight learnable modules into a frozen pre-trained model. However, existing adapter architectures fail to adaptively leverage the low- to high-level features stored in different layers, which is necessary for solving various kinds of speech processing tasks. We therefore propose a new adapter architecture that acquires feature representations more flexibly for various speech tasks. In experiments, we applied this adapter to WavLM on four speech tasks. It performed on par with or better than naïve fine-tuning while using only 11% of the learnable parameters, and it also outperformed an existing adapter architecture. Our implementation code is available at https://github.com/sinhat98/adapter-wavlm
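
The abstract does not spell out the module design, so the following is a minimal PyTorch sketch of the two ideas it points to: a lightweight bottleneck adapter added residually inside a frozen encoder, and a learnable weighted sum over all layer outputs so a task head can mix low- and high-level features. The class names, the bottleneck width of 64, and the softmax weighting are illustrative assumptions, not the authors' published architecture (see the linked repository for that).

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    # Generic bottleneck adapter (assumed design, not the paper's exact module):
    # down-project, nonlinearity, up-project, with a residual connection.
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class LayerWeightedSum(nn.Module):
    # Learnable softmax weights over per-layer hidden states, so a
    # downstream head can adaptively mix low- to high-level features.
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: list) -> torch.Tensor:
        # hidden_states: one (batch, time, dim) tensor per encoder layer
        w = torch.softmax(self.weights, dim=0)
        stacked = torch.stack(hidden_states, dim=0)  # (layers, batch, time, dim)
        return (w.view(-1, 1, 1, 1) * stacked).sum(dim=0)

In such a setup, only the adapters, the layer weights, and the task head would be trained while the pre-trained WavLM encoder stays frozen, which is what keeps the learnable-parameter budget small.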

Keywords:
Adapter (computing), Computer science, Leverage (statistics), Transfer of learning, Architecture, Speech recognition, Artificial intelligence, Computer hardware

Metrics

Cited by: 12
FWCI (Field-Weighted Citation Impact): 3.07
References: 40
Citation normalized percentile: 0.90 (in the top 10%)


Topics

Speech Recognition and Synthesis (Physical Sciences → Computer Science → Artificial Intelligence)
Music and Audio Processing (Physical Sciences → Computer Science → Signal Processing)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)