JOURNAL ARTICLE

Contrast, Stylize and Adapt: Unsupervised Contrastive Learning Framework for Domain Adaptive Semantic Segmentation

Abstract

To overcome the domain gap between synthetic and real-world datasets, unsupervised domain adaptation methods have been proposed for semantic segmentation. The majority of previous approaches have attempted to reduce the gap at either the pixel or the feature level, disregarding the fact that the two components interact positively. To address this, we present CONtrastive FEaTure and pIxel alignment (CON-FETI), which bridges the domain gap at both the pixel and feature levels using a unique contrastive formulation. We introduce well-estimated prototypes that incorporate category-wise cross-domain information to link the two alignments: pixel-level alignment is achieved by a jointly trained style transfer module with prototypical semantic consistency, while feature-level alignment is enforced on cross-domain features through a pixel-to-prototype contrast. Our extensive experiments demonstrate that our method outperforms existing state-of-the-art methods using DeepLabV2. Our code has been made publicly available.
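The pixel-to-prototype contrast described in the abstract can be sketched as an InfoNCE-style loss: class prototypes are estimated as (normalized) mean features per category across both domains, and each pixel feature is pulled toward its class prototype and pushed away from the others. The function names, the pure-Python formulation, and the temperature value below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def l2_normalize(v):
    # Normalize a feature vector to unit length (guard against zero norm).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def class_prototypes(features, labels, num_classes):
    # Prototype per class: the normalized mean of all pixel features with
    # that label. In CON-FETI the averaging would span both domains; here
    # features/labels are just flat lists of vectors and class ids.
    sums = {c: None for c in range(num_classes)}
    counts = {c: 0 for c in range(num_classes)}
    for f, y in zip(features, labels):
        sums[y] = list(f) if sums[y] is None else [a + b for a, b in zip(sums[y], f)]
        counts[y] += 1
    return {c: l2_normalize([x / counts[c] for x in sums[c]])
            for c in range(num_classes) if counts[c] > 0}

def pixel_to_prototype_loss(features, labels, prototypes, tau=0.1):
    # InfoNCE over prototypes: for each pixel, cosine similarity to every
    # prototype is a logit; the loss is cross-entropy against the pixel's
    # class, computed with a numerically stable log-sum-exp.
    loss = 0.0
    for f, y in zip(features, labels):
        f = l2_normalize(f)
        logits = {c: sum(a * b for a, b in zip(f, p)) / tau
                  for c, p in prototypes.items()}
        m = max(logits.values())
        denom = sum(math.exp(v - m) for v in logits.values())
        loss += -(logits[y] - m - math.log(denom))
    return loss / len(features)
```

A correctly labeled pixel whose feature lies near its class prototype contributes a near-zero loss, while a pixel assigned to the wrong prototype is penalized heavily; the temperature `tau` controls how sharply the contrast concentrates on the hardest negatives.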

Keywords:
Computer science, Artificial intelligence, Pattern recognition, Domain adaptation, Semantic segmentation, Contrastive learning, Feature extraction, Style transfer

Metrics

Cited by: 11
FWCI (Field-Weighted Citation Impact): 2.81
References: 75
Citation Normalized Percentile: 0.89

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
© 2026 ScienceGate Book Chapters — All rights reserved.