JOURNAL ARTICLE

Efficient Differentially Private Fine-Tuning with QLoRA and Prefix Tuning for Large Language Models

Zhe-Min Tan, Xi Xiong, Dong Xu

Year: 2025
Journal: Journal of Computer Science and Artificial Intelligence
Vol: 2 (3), Pages: 50-54

Abstract

Large language models (LLMs) have achieved remarkable success in natural language processing (NLP) tasks. However, fine-tuning LLMs using private datasets raises significant privacy concerns, as models can inadvertently memorize sensitive information. Differentially Private Stochastic Gradient Descent (DP-SGD) provides a mathematically rigorous solution but suffers from high computational overhead, slow convergence, and excessive privacy budget consumption, making it impractical for large-scale models. To address these challenges, we propose an efficient differentially private fine-tuning method that combines Quantized Low-Rank Adaptation (QLoRA) and Prefix Tuning. QLoRA employs 4-bit NormalFloat quantization and low-rank adaptation, significantly reducing memory consumption and improving computational efficiency. Prefix Tuning optimizes a small set of prefix vectors without modifying the model’s main parameters, further reducing the impact of DP noise. Additionally, we introduce a hybrid adaptive gradient clipping strategy, which applies sample-wise adaptive clipping for Prefix Tuning and group-wise clipping for QLoRA, effectively balancing privacy protection and model utility. We evaluate our approach on GPT-2 using benchmark datasets including E2E NLG Challenge, XSum, SST-2, and DART, measuring performance using BLEU, ROUGE, and F1-score. Results demonstrate that QLoRA + Prefix Tuning achieves up to 75% memory reduction while maintaining over 95% of the original model performance under a moderate privacy budget (ε=3), outperforming traditional DP fine-tuning methods. Our work provides a practical and scalable solution for privacy-preserving LLM fine-tuning in resource-constrained environments.
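The hybrid adaptive gradient clipping strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `dp_clip_and_noise`, the `prefix`/`lora` parameter-name prefixes, and the per-batch dict layout are all assumptions made for the example. Per-sample (sample-wise) clipping bounds each sample's prefix gradient by its own norm, while group-wise clipping computes one joint norm over the whole set of LoRA parameters; Gaussian noise calibrated to the clipping norms is then added, as in standard DP-SGD.

```python
import numpy as np

def dp_clip_and_noise(per_sample_grads, prefix_clip, lora_clip,
                      noise_multiplier, rng):
    """Hybrid clipping sketch (hypothetical API, not the paper's code).

    per_sample_grads: list of dicts, one per sample, mapping a parameter
    name ("prefix.*" or "lora.*") to its gradient array.
    Prefix parameters get sample-wise clipping (one factor per sample);
    LoRA parameters get group-wise clipping (one factor over the group).
    Returns the averaged, noised gradient per parameter.
    """
    n = len(per_sample_grads)
    summed = {k: np.zeros_like(v) for k, v in per_sample_grads[0].items()}
    for sample in per_sample_grads:
        # Sample-wise: clip this sample's prefix gradients by their own L2 norm.
        prefix_norm = np.sqrt(sum(np.sum(sample[k] ** 2)
                                  for k in sample if k.startswith("prefix")))
        prefix_scale = min(1.0, prefix_clip / (prefix_norm + 1e-12))
        # Group-wise: one joint norm over all LoRA parameters of this sample.
        lora_norm = np.sqrt(sum(np.sum(sample[k] ** 2)
                                for k in sample if k.startswith("lora")))
        lora_scale = min(1.0, lora_clip / (lora_norm + 1e-12))
        for k, g in sample.items():
            scale = prefix_scale if k.startswith("prefix") else lora_scale
            summed[k] += g * scale
    # Gaussian noise calibrated to each group's clipping norm, then average.
    out = {}
    for k, v in summed.items():
        clip = prefix_clip if k.startswith("prefix") else lora_clip
        noise = rng.normal(0.0, noise_multiplier * clip, v.shape)
        out[k] = (v + noise) / n
    return out
```

Because each per-sample contribution is bounded before noising, the Gaussian mechanism's sensitivity analysis applies separately to the prefix and LoRA groups, which is what lets the two clipping norms be tuned independently.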

Keywords:
Prefix, Fine-tuning, Computer science, Materials science, Linguistics, Physics, Particle physics, Philosophy

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 11
Citation Normalized Percentile: 0.03

Topics

Privacy-Preserving Technologies in Data (Physical Sciences → Computer Science → Artificial Intelligence)
Cryptography and Data Security (Physical Sciences → Computer Science → Artificial Intelligence)
Artificial Intelligence in Healthcare and Education (Health Sciences → Medicine → Health Informatics)