JOURNAL ARTICLE

Jailbreaking Prompt Attack: A Controllable Adversarial Attack against Diffusion Models

Keywords:
Adversarial system, Computer science, Diffusion, Computer security, Artificial intelligence, Physics

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 9.64
Refs: 0
Citation Normalized Percentile: 0.97

Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Smart Grid Security and Resilience
Physical Sciences →  Engineering →  Control and Systems Engineering