JOURNAL ARTICLE

Defending against adversarial attacks in deep neural networks

Abstract

We focus on defending against adversarial attacks on deep neural networks using signal analysis techniques. The method employs a novel signal processing theory as a defense against adversarial perturbations. It neither modifies the protected network nor requires knowledge of the process used to generate adversarial examples. Extensive evaluation experiments demonstrate the efficiency and effectiveness of the proposed defense.
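The abstract does not specify which signal processing theory the defense uses. As a generic illustration of the class of defense it describes, a preprocessing step applied to inputs while the protected network stays unmodified, the sketch below low-pass filters an image in the frequency domain, where adversarial perturbations often concentrate their energy. The function name `lowpass_defense` and the `cutoff` parameter are hypothetical, not taken from the paper:

```python
import numpy as np

def lowpass_defense(image, cutoff=0.25):
    """Suppress high-frequency content of a 2-D input before it
    reaches the classifier; the protected model is never modified."""
    h, w = image.shape
    # Move to the frequency domain, centering the DC component.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Circular low-pass mask: keep frequencies within `cutoff` * Nyquist.
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    radius = cutoff * min(h, w) / 2
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    # Zero everything outside the mask and transform back.
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Usage: filter the (possibly perturbed) input, then feed the result
# to the unmodified classifier.
img = np.random.rand(32, 32)
clean = lowpass_defense(img)
```

Because the filter only touches the input, it satisfies the abstract's two constraints: no change to the network, and no assumption about how the adversarial examples were generated.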

Keywords:
Adversarial attacks, Deep neural networks, Artificial neural networks, Signal processing, Artificial intelligence, Machine learning

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.15
References: 31
Citation Normalized Percentile: 0.55

Topics

Adversarial Robustness in Machine Learning
Physical Sciences → Computer Science → Artificial Intelligence
Integrated Circuits and Semiconductor Failure Analysis
Physical Sciences → Engineering → Electrical and Electronic Engineering
Physical Unclonable Functions (PUFs) and Hardware Security
Physical Sciences → Computer Science → Hardware and Architecture

Related Documents

JOURNAL ARTICLE

Compressive imaging for defending deep neural networks from adversarial attacks

Vladislav Kravets, Bahram Javidi, Adrian Stern

Journal: Optics Letters Year: 2021 Vol: 46 (8) Pages: 1951-1951
JOURNAL ARTICLE

Optical firewall for defending deep neural networks from adversarial attacks

Vladislav Kravets, Bahram Javidi, Adrian Stern

Journal: Frontiers in Optics + Laser Science 2021 Year: 2021 Pages: FW5A.2-FW5A.2
JOURNAL ARTICLE

Defending Deep Learning Models Against Adversarial Attacks

Nag Mani, Melody Moh, Teng-Sheng Moh

Journal: International Journal of Software Science and Computational Intelligence Year: 2020 Vol: 13 (1) Pages: 1-18
© 2026 ScienceGate Book Chapters — All rights reserved.