JOURNAL ARTICLE

Model Compression vs. Adversarial Robustness: An Empirical Study on Pre-trained Models of Code

Anonymous

Year: 2025
Journal: Zenodo (CERN)
Publisher: European Organization for Nuclear Research (CERN)

Abstract

Transformer-based large pre-trained models of code (PTMCs) have shown remarkable performance in various software analytics tasks, but their adoption is hindered by high computational costs, slow inference speeds, and substantial environmental impact. Model compression techniques such as pruning, quantization, and knowledge distillation have gained traction as ways to address these challenges. However, the impact of these techniques on the robustness of PTMCs in adversarial scenarios remains poorly understood, and understanding how compressed PTMCs behave under adversarial attacks is essential for their safe and effective deployment. To bridge this knowledge gap, we conduct a comprehensive evaluation of how common compression strategies affect the adversarial robustness of PTMCs. We assess compressed versions of three widely used PTMCs across three software analytics tasks, using six evaluation metrics and four classical adversarial attacks. Our findings indicate that compressed models generally maintain performance comparable to their uncompressed counterparts; under adversarial attack, however, they exhibit significantly reduced robustness. This vulnerability is consistent across all three compression techniques, with knowledge-distilled models suffering the most pronounced degradation. These results reveal a trade-off between model size reduction and adversarial robustness, underscoring the need for care when deploying compressed PTMCs in security-critical software applications, and highlighting the need for further research into compression strategies that balance computational efficiency with adversarial robustness.
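As background for the knowledge-distillation technique the abstract identifies as the most vulnerable, the following is a minimal, illustrative sketch of the standard soft-target distillation objective (Hinton-style KL divergence between temperature-softened teacher and student distributions). The temperature `T` and the example logits are assumptions for illustration, not values from the study:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between the softened teacher and student
    distributions, scaled by T^2 as in the soft-target formulation."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return T * T * sum(pi * (math.log(pi) - math.log(qi))
                       for pi, qi in zip(p, q))

# A student that already matches the teacher incurs zero loss;
# an uninformed (uniform) student incurs a positive loss.
teacher = [2.0, 0.5, -1.0]
loss_matched = distillation_loss(teacher, teacher)
loss_uniform = distillation_loss(teacher, [0.0, 0.0, 0.0])
```

During distillation, this term is typically mixed with the ordinary cross-entropy on hard labels; the student learns from the teacher's full output distribution rather than only the argmax class.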

Keywords:
Adversarial system; Software; Robustness (evolution); Software deployment; Vulnerability assessment; Source code; Source lines of code

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 0
Citation Normalized Percentile: 0.24

Topics

Adversarial Robustness in Machine Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Malware Detection Techniques (Physical Sciences → Computer Science → Signal Processing)
Security and Verification in Computing (Physical Sciences → Computer Science → Artificial Intelligence)