Models with the ability to overcome the domain shift between synthetic and real-world images can deliver satisfactory results in real-world deployment even when trained only on synthetic images. This eliminates the need to collect and annotate real-world data, which is not only time-consuming and expensive but often impractical. With the advent of Transformers, research focus in this area has shifted towards improving model capability rather than domain adaptability. This paper proposes an improved adversarial domain-adapted segmentation network that uses a feature distillation loss. The model uses the CNN architecture DeepLabv2 as its backbone, and a rare class sampler is applied to the source domain. An additional ImageNet feature distance loss is used for faster convergence and improved performance. The model, trained on synthetic images, is evaluated on real traffic images from the Cityscapes dataset as well as Kerala traffic images collected from Google to assess its adaptability.
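The abstract does not give the exact form of the ImageNet feature distance loss. A common formulation in domain-adaptive segmentation penalizes drift of the backbone's features away from those of a frozen ImageNet-pretrained copy of the same encoder, computed on source images. A minimal NumPy sketch under that assumption (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def feature_distance_loss(f_model: np.ndarray, f_imagenet: np.ndarray) -> float:
    """Mean squared L2 distance between the segmentation backbone's
    features (f_model) and the corresponding features from a frozen
    ImageNet-pretrained encoder (f_imagenet).

    Sketch only: the paper's exact formulation (e.g. any class-wise
    masking of the feature distance) is not stated in the abstract.
    """
    diff = f_model - f_imagenet
    return float(np.mean(diff ** 2))

# Toy usage: two small feature maps of shape (channels, height, width).
f_student = np.ones((4, 2, 2))
f_frozen = np.zeros((4, 2, 2))
loss = feature_distance_loss(f_student, f_frozen)  # -> 1.0
```

In training, this term would be added to the segmentation and adversarial losses, with the ImageNet encoder's weights kept frozen so gradients flow only into the adapted backbone.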