JOURNAL ARTICLE

Boosting Adversarial Training with Hardness-Guided Attack Strategy

Shiyuan He, Jiwei Wei, Chaoning Zhang, Xing Xu, Jingkuan Song, Yang Yang, Heng Tao Shen

Year: 2024 Journal: IEEE Transactions on Multimedia Vol: 26 Pages: 7748-7760 Publisher: Institute of Electrical and Electronics Engineers

Abstract

The susceptibility of deep neural networks (DNNs) to adversarial examples has raised significant concerns regarding the security and reliability of artificial intelligence systems. These examples contain maliciously crafted perturbations that are imperceptible to the human eye yet can cause the model to make wrong predictions. Adversarial training (AT) is the de facto standard method for enhancing adversarial robustness. However, the improved robustness often comes at the cost of a significant drop in standard accuracy on clean samples. Numerous works have attempted to alleviate this trade-off by identifying its causes. A key factor lies in the variability of clean samples, which leads to different adversarial examples being generated by the same attack strategy. Another factor is the disruption of the underlying data structure caused by adversarial perturbations. To overcome these challenges, we propose a novel adversarial training framework named Hardness-Guided Sample-Dependent Adversarial Training (HGSD-AT), which dynamically adjusts the attack strategy based on the hardness of the current adversarial sample to further improve the robustness of the model. By utilizing two types of constraints, constructed from a temporal perspective and a spatial-distribution perspective, our method directly learns the impact of attack methods on the model, rather than the indirect effects associated with sample distribution. This approach aims to improve the generation of adversarial examples while simultaneously enhancing the robustness and accuracy of DNNs. Our approach exhibits superior performance in terms of both robustness and natural accuracy compared to state-of-the-art defense methods, as validated through comprehensive experiments conducted on three benchmark datasets.
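The abstract contrasts the paper's hardness-guided strategy with standard adversarial training, where every clean sample is perturbed by the same fixed attack before each training step. As a point of reference, below is a minimal NumPy sketch of that fixed-strategy inner loop (a PGD attack on a toy logistic model). This is only the baseline attack the paper improves upon, not HGSD-AT itself; all function names and hyperparameters (`eps`, `alpha`, `steps`) are illustrative.

```python
import numpy as np

def pgd_attack(x, y, w, eps=0.1, alpha=0.02, steps=10):
    """Fixed-strategy PGD attack on a logistic model p = sigmoid(w . x).

    Every sample gets the same eps/alpha/steps; a hardness-guided scheme
    like the paper's would instead adapt these per sample. Illustrative
    sketch only, not the paper's method.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w
        p = 1.0 / (1.0 + np.exp(-z))               # sigmoid output
        grad = (p - y) * w                          # d(BCE loss)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)       # ascend the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into eps-ball
    return x_adv

# Toy check: the adversarial point stays in the eps-ball and does not
# have a lower loss than the clean point.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
x = rng.normal(size=4)
y = 1.0

def bce(x_):
    p = 1.0 / (1.0 + np.exp(-(x_ @ w)))
    return -np.log(p + 1e-12)

x_adv = pgd_attack(x, y, w)
```

In full adversarial training, `x_adv` would then replace `x` in the gradient-descent update of the model weights, so the model learns on the worst-case points the attack can find.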

Keywords:
Boosting (machine learning), Adversarial system, Computer science, Artificial intelligence, Machine learning, Training

Metrics

Cited By: 5
FWCI (Field-Weighted Citation Impact): 3.19
Refs: 77
Citation Normalized Percentile: 0.88

Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Advanced Malware Detection Techniques
Physical Sciences →  Computer Science →  Signal Processing
Anomaly Detection Techniques and Applications
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

CONFERENCE PAPER

LAS-AT: Adversarial Training with Learnable Attack Strategy

Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

Conference: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Year: 2022 Pages: 13388-13398
JOURNAL ARTICLE

Boosting Adversarial Transferability Through Adversarial Attack Enhancer

Wenli Zeng, Hong Huang, Jixin Chen

Journal: Applied Sciences Year: 2025 Vol: 15 (18) Pages: 10242-10242
JOURNAL ARTICLE

Boosting Fast Adversarial Training With Learnable Adversarial Initialization

Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao

Journal: IEEE Transactions on Image Processing Year: 2022 Vol: 31 Pages: 4417-4430
JOURNAL ARTICLE

Boosting Adversarial Training with Learnable Distribution

Kai Chen, Jinwei Wang, James Msughter Adeke, Guangjie Liu, Yuewei Dai

Journal: Computers, Materials &amp; Continua Year: 2024 Vol: 78 (3) Pages: 3247-3265
JOURNAL ARTICLE

Boosting cross‐task adversarial attack with random blur

Yaoyuan Zhang, Yu-an Tan, Ming-Feng Lu, Tian Chen, Yuanzhang Li, Quanxin Zhang

Journal: International Journal of Intelligent Systems Year: 2022 Vol: 37 (10) Pages: 8139-8154