Abstract

Models that can learn to overcome the domain shift between synthetic and real-world images deliver satisfactory results in real-world deployment even when trained only on synthetic images. This eliminates the need to collect and annotate real-world data, which is not only time-consuming and expensive but often impractical. With the advent of Transformers, research in this area has shifted towards improving model capacity rather than domain adaptability. This paper proposes an improved adversarial domain-adapted segmentation network that uses a feature distillation loss. The model uses the CNN architecture DeepLabv2 as its backbone, together with a rare-class sampler for the source domain. An additional ImageNet feature distance loss is used for faster convergence and improved performance. The model, trained on synthetic images, is evaluated on real traffic images from the Cityscapes dataset and on Kerala traffic images collected from Google to assess its adaptability.
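The ImageNet feature distance loss mentioned in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the array shapes, and the optional pixel mask (e.g. restricting the loss to thing-class pixels) are assumptions, following the common practice of regularizing the trainable encoder's features towards those of a frozen ImageNet-pretrained copy of the same backbone.

```python
import numpy as np

def feature_distance_loss(f_model, f_frozen, mask=None):
    """Hypothetical sketch of an ImageNet feature distance loss.

    f_model:  features (C, H, W) from the encoder being adapted.
    f_frozen: features (C, H, W) from a frozen ImageNet-pretrained encoder.
    mask:     optional boolean (H, W) array selecting the pixels over
              which the distance is averaged.
    Returns the mean per-pixel squared Euclidean distance between the
    two feature maps, which penalizes the adapted encoder for drifting
    away from its ImageNet-pretrained features.
    """
    # Per-pixel squared L2 distance across the channel dimension.
    d = np.sum((f_model - f_frozen) ** 2, axis=0)
    if mask is not None:
        # Average only over the selected pixels (0 if none selected).
        return float(d[mask].mean()) if mask.any() else 0.0
    return float(d.mean())
```

In training, this term would be added (with a small weight) to the segmentation and adversarial losses, so the encoder keeps ImageNet-like features while adapting to the target domain.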

Keywords:
Computer science; Artificial intelligence; Machine learning; Domain adaptation; Synthetic data; Image segmentation; Feature extraction; Annotation; Pattern recognition; Classifier

Topics

Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
© 2026 ScienceGate Book Chapters. All rights reserved.