JOURNAL ARTICLE

Late Adapter Tuning: A Cost-Effective Approach to Parameter-Efficient Fine-Tuning for Large Language Models

Zhengjie Gao, Ruiteng Li, Yuxin Fan, Min Liao, Xinyu Song

Year: 2025 Journal: International Journal of Computers Communications & Control, Vol. 20 (6) Publisher: Agora University

Abstract

Fine-tuning large language models (LLMs) is computationally prohibitive for individual researchers, especially in resource-constrained scenarios. While parameter-efficient fine-tuning (PEFT) methods address this challenge, existing approaches suffer from inefficiencies due to long backpropagation paths and hidden vector distortion. To overcome these limitations, we propose Late Adapter Tuning (LAT), a novel PEFT method that reduces training cost by fine-tuning only a single hidden layer near the model's output. LAT integrates a customized adapter architecture with hard prompting to preserve hidden vector dimensions and shorten gradient propagation paths. Experiments on four classification datasets demonstrate that LAT reduces training time by 2.4×, decreases GPU memory usage by 76.5%, and improves accuracy by 4.31% compared to full-parameter fine-tuning. Our work provides a practical solution for deploying LLMs in low-resource environments while advancing the theoretical understanding of gradient-efficient adaptation strategies.
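The abstract's core mechanism, an adapter placed near the output that preserves the hidden vector dimension so only a short gradient path is trained, can be sketched as follows. This is a minimal illustration assuming a standard residual bottleneck adapter (down-projection, nonlinearity, up-projection, residual add); the paper's exact LAT adapter architecture and its hard-prompting component are not described in the abstract, and the dimensions and weights below are toy values.

```python
# Hedged sketch of a residual bottleneck adapter (assumed architecture, not
# necessarily LAT's exact design). The residual connection guarantees the
# output has the same hidden dimension as the input, and zero-initializing
# the up-projection makes the adapter start as the identity map.

def matmul(A, B):
    # Naive matrix multiply for (batch x m) @ (m x n) nested lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def adapter_forward(h, W_down, W_up):
    # h: (batch, d_model) hidden states from a late transformer layer.
    z = [[max(x, 0.0) for x in row] for row in matmul(h, W_down)]  # ReLU bottleneck
    up = matmul(z, W_up)
    # Residual add keeps the shape (batch, d_model) unchanged.
    return [[hi + ui for hi, ui in zip(hr, ur)] for hr, ur in zip(h, up)]

d_model, d_bottleneck = 4, 2  # toy sizes; real models use e.g. 768 -> 64
h = [[0.5, -1.0, 2.0, 0.1]]
W_down = [[0.1, -0.2], [0.0, 0.3], [0.2, 0.1], [-0.1, 0.0]]
W_up = [[0.0] * d_model for _ in range(d_bottleneck)]  # zero-init: identity at start

out = adapter_forward(h, W_down, W_up)
```

Only `W_down` and `W_up` (plus a task head) would receive gradients during training; the base model stays frozen, which is what shortens the backpropagation path relative to full-parameter fine-tuning.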
