JOURNAL ARTICLE

Adversarial Attacks Against Network Intrusion Detection Systems

Sharma, Sanidhya

Year: 2024 | Journal: OPAL (Open@LaTrobe) (La Trobe University) | Publisher: La Trobe University

Abstract

The explosive growth of computer networks over the past few decades has significantly enhanced communication capabilities. However, this expansion has also attracted malicious attackers seeking to compromise and disable these networks for personal gain. Network Intrusion Detection Systems (NIDS) were developed to detect threats and alert users to potential attacks. As the types and methods of attacks have grown exponentially, NIDS have struggled to keep pace. A paradigm shift occurred when NIDS began using Machine Learning (ML) to differentiate anomalous from normal traffic, easing the burden of tracking and defending against new attacks. However, the adoption of ML-based anomaly detection in NIDS has opened a new avenue of exploitation rooted in an inherent weakness of machine learning models: their susceptibility to adversarial attacks. In this work, we explore the application of adversarial attacks from the image domain to bypass NIDS. We evaluate both white-box and black-box adversarial attacks against nine popular ML-based NIDS models. Specifically, we investigate Projected Gradient Descent (PGD) attacks on two ML models, transfer attacks using adversarial examples generated by the PGD attack, the score-based Zeroth Order Optimization (ZOO) attack, and two boundary-based attacks, namely the Boundary and HopSkipJump attacks. Through comprehensive experiments on the NSL-KDD dataset, we find that logistic regression and multilayer perceptron models are highly vulnerable to all studied attacks, whereas decision trees, random forests, and XGBoost are moderately vulnerable to transfer attacks or PGD-assisted transfer attacks, with attack success rates (ASR) of roughly 60 to 70%, but highly susceptible to targeted HopSkipJump or Boundary attacks, with close to 100% ASR.
Moreover, SVM-linear is highly vulnerable to both transfer attacks and targeted HopSkipJump or Boundary attacks, achieving around 100% ASR, whereas SVM-rbf is highly vulnerable to transfer attacks (77% ASR) but only moderately vulnerable to targeted HopSkipJump or Boundary attacks (52% ASR). Finally, both KNN and Label Spreading models are robust against transfer-based attacks (below 30% ASR) but highly vulnerable to targeted HopSkipJump or Boundary attacks, reaching 100% ASR at the cost of large perturbations. Our findings may provide insights for designing future NIDS that are robust against adversarial attacks.
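To illustrate the white-box technique named in the abstract, the following is a minimal sketch of an L-infinity PGD attack against a simple logistic-regression scorer. The toy weights, step size, and epsilon here are illustrative assumptions, not taken from the article, which evaluates library-trained models on NSL-KDD features.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """L-infinity PGD against a logistic-regression scorer sigmoid(w.x + b)."""
    x_adv = x.copy()
    for _ in range(steps):
        # gradient of the binary cross-entropy loss with respect to the input
        grad = (sigmoid(x_adv @ w + b) - y) * w
        # ascend the loss, then project back into the eps-ball around x
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# toy model: flag traffic as malicious when the feature sum is large
w = np.ones(4)
b = -2.0
x = np.array([1.0, 1.0, 0.5, 0.5])   # originally scored as an attack (label 1)
x_adv = pgd_attack(x, 1.0, w, b)
print(sigmoid(x @ w + b) > 0.5, sigmoid(x_adv @ w + b) > 0.5)  # → True False
```

For a linear model the perturbation saturates at the eps-ball boundary, which mirrors the abstract's observation that gradient-based attacks easily evade linear and neural models; tree ensembles lack usable gradients, which is why the study turns to transfer and decision-based (Boundary, HopSkipJump) attacks for them.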

Keywords:
Intrusion detection system; Adversarial system; Attack model; Adversarial machine learning; Transfer of learning; Boundary (topology); Gradient descent; Domain (mathematical analysis)


Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Network Security and Intrusion Detection
Physical Sciences →  Computer Science →  Computer Networks and Communications
Smart Grid Security and Resilience
Physical Sciences →  Engineering →  Control and Systems Engineering