JOURNAL ARTICLE

Delving Deep into the Generalization of Vision Transformers under Distribution Shifts

Abstract

Vision Transformers (ViTs) have achieved impressive performance on various vision tasks, yet their generalization under distribution shifts (DS) is still poorly understood. In this work, we comprehensively study the out-of-distribution (OOD) generalization of ViTs. For a systematic investigation, we first present a taxonomy of distribution shifts. We then perform extensive evaluations of ViT variants under different DS and compare their generalization with that of Convolutional Neural Network (CNN) models. We obtain two important observations: 1) ViTs learn weaker biases towards backgrounds and textures and stronger inductive biases towards shapes and structures, which is more consistent with human cognitive traits; consequently, ViTs generalize better than CNNs under DS. With the same or a smaller number of parameters, ViTs lead corresponding CNNs by more than 5% in top-1 accuracy under most types of DS. 2) As the model scale increases, ViTs strengthen these biases and thus gradually narrow the gap between in-distribution and OOD performance. To further improve the generalization of ViTs, we design Generalization-Enhanced ViTs (GE-ViTs) from the perspectives of adversarial learning, information theory, and self-supervised learning. By comprehensively investigating these GE-ViTs and comparing them with their corresponding CNN models, we observe: 1) for the enhanced models, larger ViTs still benefit more in OOD generalization; 2) GE-ViTs are more sensitive to hyper-parameters than their corresponding CNN models. We design a smoother learning strategy that achieves a stable training process and improves OOD performance by 4% over vanilla ViTs. We hope our comprehensive study can shed light on the design of more generalizable learning architectures. Code and datasets are released at https://github.com/Phoenix1153/ViT_OOD_generalization.
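The abstract's central comparison is the gap between in-distribution and OOD top-1 accuracy, which narrows as ViTs scale up. A minimal sketch of that metric is below; the prediction and label values are illustrative placeholders, not results from the paper:

```python
def top1_accuracy(predictions, labels):
    """Fraction of samples whose predicted class matches the ground-truth label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def id_ood_gap(id_acc, ood_acc):
    """In-distribution minus out-of-distribution top-1 accuracy.

    A smaller gap indicates better generalization under distribution shift.
    """
    return id_acc - ood_acc

# Illustrative placeholder predictions (not data from the paper):
labels = [0, 1, 2, 1, 1]
preds_id = [0, 1, 2, 2, 1]   # evaluated on in-distribution images
preds_ood = [0, 1, 0, 2, 1]  # evaluated on shifted images (e.g. corrupted/stylized)

acc_id = top1_accuracy(preds_id, labels)
acc_ood = top1_accuracy(preds_ood, labels)
gap = id_ood_gap(acc_id, acc_ood)
```

In the study's framing, two models with equal in-distribution accuracy can be ranked by this gap: the one with the smaller drop under shift generalizes better.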

Keywords:
Generalization, Computer science, Artificial intelligence, Convolutional neural network, Transformer, Machine learning, Deep learning, Artificial neural network, Mathematics, Engineering

Metrics

Cited By: 85
FWCI (Field Weighted Citation Impact): 9.99
Refs: 59
Citation Normalized Percentile: 0.99 (top 1%)

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence
Retinal Imaging and Analysis
Health Sciences → Medicine → Radiology, Nuclear Medicine and Imaging
Advanced Neural Network Applications
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

Delving Deeper Into Astromorphic Transformers

Md Zesun Ahmed Mia, Malyaban Bal, Abhronil Sengupta

Journal: IEEE Transactions on Cognitive and Developmental Systems, Year: 2025, Vol: 17 (6), Pages: 1436-1446
JOURNAL ARTICLE

Delving into Deep Learning

Brian Hayes

Journal: American Scientist, Year: 2014, Vol: 102 (3), Pages: 186-186
JOURNAL ARTICLE

CrossNorm and SelfNorm for Generalization under Distribution Shifts

Zhiqiang Tang, Yunhe Gao, Yi Zhu, Zhi Zhang, Mu Li, Dimitris Metaxas

Journal: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Year: 2021, Pages: 52-61