JOURNAL ARTICLE

Universally Strict Black-Box Attacks Against Deep Neural Networks

Abstract

Adversarial attacks have demonstrated the vulnerability of deep neural networks (DNNs), raising considerable security concerns. Existing attack methods require either prior knowledge of the victim DNNs and their labels or frequent model querying. These requirements are usually infeasible or time-consuming, casting doubt on whether such attacks can be launched in real-world scenarios. To address this, we propose a universally strict black-box attack that generates adversarial samples using only unlabeled data, reducing the reliance on external information such as victim models, training processes, and ground-truth labels. Specifically, we first learn a latent manifold using contrastive learning. We then propose a novel universally adversarial loss that obtains adversaries directly in the latent space, exploiting the dissimilarity between samples to craft perturbations without access to labels or decision boundaries. Moreover, we propose a cluster-based selection of negative samples to improve the effectiveness of our attack. Evaluated against baseline models, the universally strict black-box attack reaches an average fooling rate of 57.93%, on par with transfer-based black-box attacks. Our method demonstrates the threat of adversarial attacks under more practical conditions and can serve as a new benchmark for assessing the robustness of DNNs.
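The core idea described in the abstract can be sketched roughly as follows: craft a bounded perturbation that pushes a sample's latent embedding away from its own clean (positive) embedding and toward a negative cluster centroid, with no labels and no queries to the victim model. The toy linear encoder, the finite-difference gradient, and all hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))  # toy stand-in for a contrastive encoder

def embed(x):
    # Embed and project onto the unit hypersphere, as is common
    # in contrastive representation learning.
    z = W @ x
    return z / np.linalg.norm(z)

def adversarial_loss(x_adv, z_pos, z_neg):
    # Sketch of a "universally adversarial" objective: minimize cosine
    # similarity to the clean embedding while maximizing similarity to a
    # negative centroid selected in latent space (assumed form).
    z = embed(x_adv)
    return z @ z_pos - z @ z_neg

def attack(x, z_neg, eps=0.1, steps=50, lr=0.05):
    z_pos = embed(x)
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Finite-difference gradient of the loss w.r.t. the perturbation
        # (kept simple here; an autograd framework would be used in practice).
        grad = np.zeros_like(delta)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = 1e-4
            grad[i] = (adversarial_loss(x + delta + d, z_pos, z_neg)
                       - adversarial_loss(x + delta - d, z_pos, z_neg)) / 2e-4
        delta -= lr * grad                 # descend the adversarial loss
        delta = np.clip(delta, -eps, eps)  # keep the perturbation bounded
    return x + delta

x = rng.standard_normal(32)
z_neg = embed(rng.standard_normal(32))  # stand-in for a negative cluster centroid
x_adv = attack(x, z_neg)
print(float(embed(x) @ embed(x_adv)))   # similarity to the clean embedding drops
```

The attack touches only the (surrogate) encoder, never the victim model or any labels, which is what distinguishes this setting from transfer-based or query-based black-box attacks.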

Keywords:
Adversarial attacks; Deep neural networks; Black-box attack; Robustness; Ground truth; Deep learning; Threat model; Computer security

Topics

Adversarial Robustness in Machine Learning
Anomaly Detection Techniques and Applications
Domain Adaptation and Few-Shot Learning
(Physical Sciences → Computer Science → Artificial Intelligence)
© 2026 ScienceGate Book Chapters — All rights reserved.