JOURNAL ARTICLE

Vulnerability Evaluation of Android Malware Detectors against Adversarial Examples

Abstract

In this paper, we evaluate the robustness of machine learning classifiers (Logistic Regression, CART, Random Forest) by fabricating adversarial examples: malware samples that are statistically indistinguishable from goodware. To this end, we demonstrate three scenarios for creating tainted malware samples that mislead classification models and reduce their accuracy: (a) random attribute injection, (b) insertion of prominent attributes from legitimate apps, and (c) poisoning of class labels. Experiments were conducted on a dataset of 15,649 Android applications comprising 5,373 malicious and 10,276 legitimate apps. The outcome of the investigation demonstrates a significant drop in accuracy, in the range of 12-50%. However, in the absence of adversarial examples in the test set, the accuracy of the classifiers was between 94.8% and 97.9%.
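The three attack scenarios can be illustrated on the binary attribute vectors typically used in Android malware detection. The sketch below is an assumption-laden illustration, not the authors' implementation: the function names, the shared random generator, and the choice of NumPy binary vectors are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_attribute_injection(x, k, rng=rng):
    """Scenario (a): turn on k randomly chosen absent attributes
    of a binary feature vector (e.g., permissions, API calls)."""
    x = x.copy()
    absent = np.flatnonzero(x == 0)                    # attributes not yet present
    chosen = rng.choice(absent, size=min(k, absent.size), replace=False)
    x[chosen] = 1
    return x

def benign_attribute_injection(x, prominent_benign_idx):
    """Scenario (b): inject attributes that are prominent in
    legitimate apps, pulling the sample toward the goodware class."""
    x = x.copy()
    x[prominent_benign_idx] = 1
    return x

def poison_labels(y, fraction, rng=rng):
    """Scenario (c): flip the class label (0 = goodware, 1 = malware)
    of a given fraction of training samples."""
    y = y.copy()
    n_flip = int(fraction * y.size)
    idx = rng.choice(y.size, size=n_flip, replace=False)
    y[idx] = 1 - y[idx]
    return y
```

In each case the perturbed samples or labels are fed back into training or testing to measure the drop in classifier accuracy reported in the paper.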

Keywords:
Computer science, Malware, Adversarial system, Random forest, Android (operating system), Machine learning, Artificial intelligence, Vulnerability (computing), Support vector machine, Test set, Android application, Computer security, Adversarial machine learning, Data mining, Operating system

Metrics

Cited by: 2
References: 32
FWCI (Field-Weighted Citation Impact): 0.14
Citation Normalized Percentile: 0.45


Topics

Advanced Malware Detection Techniques (Physical Sciences → Computer Science → Signal Processing)
Network Security and Intrusion Detection (Physical Sciences → Computer Science → Computer Networks and Communications)
Software Testing and Debugging Techniques (Physical Sciences → Computer Science → Software)