JOURNAL ARTICLE

Model Compression vs. Adversarial Robustness: An Empirical Study on Language Models for Code

Awal, Md Abdul

Year: 2025
Journal: Zenodo (CERN European Organization for Nuclear Research)
Publisher: European Organization for Nuclear Research

Abstract

Transformer-based language models for code have shown remarkable performance across a range of software analytics tasks, but their adoption is hindered by high computational costs, slow inference, and substantial environmental impact. Model compression techniques such as pruning, quantization, and knowledge distillation have gained traction in addressing these challenges. However, the impact of these strategies on the adversarial robustness of compressed language models for code remains poorly understood. Understanding how compressed models behave under adversarial attack is essential for their safe and effective deployment in real-world applications. To bridge this knowledge gap, we conduct a comprehensive evaluation of how common compression strategies affect the adversarial robustness of compressed models. We assess compressed versions of three widely used language models for code on three software analytics tasks, using six evaluation metrics and four commonly used classical adversarial attacks. Our findings indicate that compressed models generally maintain performance comparable to their uncompressed counterparts; when subjected to adversarial attacks, however, they exhibit significantly reduced robustness. This vulnerability is consistent across all three compression techniques, with knowledge-distilled models suffering the most pronounced degradation. These results reveal a trade-off between model size reduction and adversarial robustness, underscoring the need for careful consideration when deploying compressed models in security-critical software applications, and for further research into compression strategies that balance computational efficiency with adversarial robustness.

Keywords:
Adversarial system, Software, Source lines of code, Robustness (evolution), Source code, Software deployment, Language model, Empirical research

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.53

Topics

Anaerobic Digestion and Biogas Production (Physical Sciences → Engineering → Building and Construction)
Enzyme Catalysis and Immobilization (Life Sciences → Biochemistry, Genetics and Molecular Biology → Molecular Biology)
Phosphorus and nutrient management (Physical Sciences → Environmental Science → Industrial and Manufacturing Engineering)

Related Documents

JOURNAL ARTICLE

Model Compression vs. Adversarial Robustness: An Empirical Study on Pre-trained Models of Code

Anonymous

Journal: Zenodo (CERN European Organization for Nuclear Research), Year: 2025
JOURNAL ARTICLE

Differential Robustness in Transformer Language Models: Empirical Evaluation Under Adversarial Text Attacks

Taniya Gidatkar, Oluwaseun Ajao, Matthew Shardlow

Journal: International Conference Recent Advances in Natural Language Processing, Year: 2025, Vol: 2025, Pages: 395-402