JOURNAL ARTICLE

Prototypical Contrastive Learning for Domain Adaptive Semantic Segmentation

Abstract

The goal of domain adaptive semantic segmentation is to train a model on labeled source-domain data that produces accurate dense predictions on the unlabeled target domain. Previous methods adopt self-training, in which reliable target-domain predictions are used as pseudo labels for training. However, intra-class variations across domains, such as the varying visual appearance within each category, have not been fully explored, leading to misaligned feature distributions between the source and target domains. In this paper, we propose to optimize the feature space with representative prototypes shared across domains. Specifically, we first adopt non-parametric clustering to model multiple prototypes in the feature space of each category. A category-discriminative feature space is then obtained via pixel-to-prototype contrastive learning. In extensive experiments, the proposed method demonstrates competitive performance on the GTA5→Cityscapes and Synthia→Cityscapes benchmarks. Notably, our method is compatible with existing UDA methods.
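The two-step recipe in the abstract — cluster each category's features into multiple prototypes, then pull each pixel toward a same-class prototype and away from other classes' prototypes — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the clustering routine (plain k-means here), the function names, the temperature `tau`, and the choice of the nearest same-class prototype as the positive are all assumptions for the sake of a runnable example.

```python
import numpy as np

def cluster_prototypes(features, k, iters=10, seed=0):
    """Model multiple prototypes for one category's features.

    A plain k-means sketch; the paper's non-parametric clustering
    choice may differ. features: (N, D) array, returns (k, D) centers.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = features[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def pixel_to_prototype_loss(feat, label, prototypes, tau=0.1):
    """InfoNCE-style pixel-to-prototype contrastive loss.

    Pulls an L2-normalized pixel feature toward the nearest prototype
    of its own class and pushes it away from all other classes'
    prototypes.

    feat:        (D,) pixel feature
    label:       int class id
    prototypes:  dict class_id -> (K, D) prototype array
    """
    # positive: highest-similarity prototype of the pixel's own class
    pos_sim = (prototypes[label] @ feat).max()
    # negatives: every prototype of every other class
    neg_sims = np.concatenate(
        [p @ feat for c, p in prototypes.items() if c != label])
    logits = np.concatenate([[pos_sim], neg_sims]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

A pixel feature aligned with one of its class's prototypes incurs a near-zero loss, while a feature sitting on another class's prototype incurs a large one — which is what drives the category-discriminative feature space the abstract describes.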

Keywords:
Computer science, Discriminative model, Artificial intelligence, Feature, Segmentation, Benchmark, Pattern recognition, Domain, Feature vector, Cluster analysis, Parametric statistics, Feature learning, Feature space, Machine learning, Mathematics

Metrics

Cited by: 3
FWCI (Field-Weighted Citation Impact): 0.77
References: 64
Citation Normalized Percentile: 0.72

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition