JOURNAL ARTICLE

LADDER: Latent boundary-guided adversarial training

Xiaowei Zhou, Ivor W. Tsang, Jie Yin

Year: 2022   Journal: Machine Learning   Vol: 112 (10)   Pages: 3851-3879   Publisher: Springer Science+Business Media

Abstract

Deep Neural Networks (DNNs) have recently achieved great success in many classification tasks. Unfortunately, they are vulnerable to adversarial attacks that generate adversarial examples with small perturbations to fool DNN models, especially in model sharing scenarios. Adversarial training, which injects adversarial examples into model training, has proven to be the most effective strategy for improving the robustness of DNN models against adversarial attacks. However, adversarial training based on the existing adversarial examples fails to generalize well to standard, unperturbed test data. To achieve a better trade-off between standard accuracy and adversarial robustness, we propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining (LADDER) that adversarially trains DNN models on latent boundary-guided adversarial examples. As opposed to most existing methods that generate adversarial examples in the input space, LADDER generates a myriad of high-quality adversarial examples by adding perturbations to latent features. The perturbations are made along the normal of the decision boundary constructed by an SVM with an attention mechanism. We analyze the merits of our generated boundary-guided adversarial examples from a boundary field perspective and via visualization. Extensive experiments and detailed analysis on MNIST, SVHN, CelebA, and CIFAR-10 validate the effectiveness of LADDER in achieving a better trade-off between standard accuracy and adversarial robustness as compared with vanilla DNNs and competitive baselines.
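The core step the abstract describes, perturbing a latent feature along the normal of a linear (SVM-style) decision boundary, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy, not the paper's implementation: the hyperplane parameters `w`, `b`, the helper `boundary_guided_perturb`, and the step size `eps` are illustrative assumptions, and the paper's attention mechanism and SVM training are not reproduced here.

```python
import numpy as np

def boundary_guided_perturb(z, w, b, eps):
    """Move latent feature z a step of size eps toward the decision
    boundary {x : w.x + b = 0}, along the hyperplane's unit normal.
    (Hypothetical helper; w, b would come from an SVM fit on latent
    features in the actual method.)"""
    n = w / np.linalg.norm(w)
    # signed distance of z from the hyperplane
    d = (z @ w + b) / np.linalg.norm(w)
    # step against the sign of d, i.e. toward the boundary
    return z - np.sign(d) * eps * n

# toy 2-D latent space: boundary is the vertical axis (w = [1, 0], b = 0)
w = np.array([1.0, 0.0])
b = 0.0
z = np.array([0.5, 0.3])          # latent feature of a clean example
z_adv = boundary_guided_perturb(z, w, b, eps=0.2)
print(z_adv)                       # moves toward the boundary: [0.3, 0.3]
```

In the full framework, such boundary-guided latent examples would be decoded back into the input space and injected into adversarial training; this sketch only shows the geometric perturbation itself.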

Keywords:
Adversarial attacks, Computer science, Robustness, Artificial intelligence, Machine learning, MNIST database, Deep neural networks, Artificial neural networks

Metrics

Cited By: 7
FWCI (Field-Weighted Citation Impact): 1.37
Refs: 52
Citation Normalized Percentile: 0.79

Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Learnable Boundary Guided Adversarial Training

Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia

Venue: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)   Year: 2021   Pages: 15721-15730
JOURNAL ARTICLE

ALAT: Adversarial Label-guided Adversarial Training

Nan Wang, Yong Yu, Honghong Wang

Journal: Pattern Recognition Letters   Year: 2025   Vol: 196   Pages: 250-256
JOURNAL ARTICLE

Reliably fast adversarial training via latent adversarial perturbation

Geon Yeong Park, Sang Wan Lee

Venue: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)   Year: 2021   Pages: 7738-7747
JOURNAL ARTICLE

Reliably fast adversarial training via latent adversarial perturbation

Geon Yeong Park, Sang Wan Lee

Venue: arXiv (Cornell University)   Year: 2021   Pages: 7758-7767