JOURNAL ARTICLE

On the Adversarial Robustness of Multi-Modal Foundation Models

Abstract

Multi-modal foundation models that combine vision and language, such as Flamingo or GPT-4, have recently attracted enormous interest. Alignment of foundation models is used to prevent them from producing toxic or harmful output. While malicious users have successfully jailbroken foundation models, an equally important question is whether honest users can be harmed by malicious third-party content. In this paper we show that imperceptible attacks on images $(\varepsilon_\infty = 1/255)$ that change the caption output of a multi-modal foundation model can be used by malicious content providers to harm honest users, e.g. by guiding them to malicious websites or broadcasting fake information. This indicates that countermeasures against adversarial attacks should be employed by any deployed multi-modal foundation model. Note: this paper contains fake information to illustrate the outcome of our attacks; it does not reflect the opinion of the authors.
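
The attack setting described in the abstract can be illustrated with a short projected-gradient-descent (PGD) loop for a targeted $\ell_\infty$-bounded perturbation of radius $\varepsilon_\infty = 1/255$. The following is a minimal PyTorch sketch, not the authors' exact algorithm; the caption_loss callable (the negative log-likelihood of an attacker-chosen target caption given an image) is an assumed interface that a concrete multi-modal captioning model would have to provide.

import torch

def targeted_pgd(image, target_caption, caption_loss,
                 eps=1/255, step_size=0.5/255, steps=100):
    """Search inside the L_inf ball of radius eps around `image` for a
    perturbed image whose caption is pushed toward `target_caption`.
    `caption_loss` is an assumed interface: NLL of the target caption."""
    image = image.clone().detach()                 # clean image in [0, 1]
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = caption_loss(image + delta, target_caption)
        loss.backward()
        with torch.no_grad():
            # Signed gradient *descent*: minimize the target-caption NLL.
            delta -= step_size * delta.grad.sign()
            # Project onto the L_inf ball, then keep pixel values valid.
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()

With eps = 1/255, each pixel changes by at most one 8-bit intensity level, which is why the perturbation is imperceptible to a human viewer while still steering the generated caption.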

Keywords:
Foundation models, Multi-modal models, Adversarial robustness, Computer security, Internet privacy, Artificial intelligence

Metrics

Cited by: 49
References: 47
FWCI (Field-Weighted Citation Impact): 8.92
Citation Normalized Percentile: 0.98 (in top 1% and top 10% of field)

Topics

Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Adversarial Robustness in Machine Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)