The bottleneck of visual domain adaptation lies in learning domain-invariant representations. In this paper, we present a simple yet effective technique named Adaptive Feature Swapping for learning domain-invariant features in Unsupervised Domain Adaptation (UDA). Adaptive Feature Swapping selects semantically irrelevant features from labeled source data and unlabeled target data and swaps them with each other. The merged representations are then also used for training under prediction consistency constraints. In this way, the model is encouraged to learn representations that are robust to domain-specific information. We develop two swapping strategies: channel swapping and spatial swapping. The former encourages the model to squeeze redundancy out of the features and attend to semantic information; the latter motivates the model to be robust to the background and to focus on objects. We conduct experiments on object recognition and semantic segmentation in the UDA setting, and the results show that Adaptive Feature Swapping improves various existing UDA methods. Our code is publicly available at https://github.com/junbaoZHUO/AFS.
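To make the two swapping strategies concrete, the sketch below illustrates the general idea of exchanging a subset of channels, or a spatial region, between a source feature map and a target feature map. This is an illustrative NumPy sketch under our own assumptions (the function names `channel_swap` and `spatial_swap`, the swap ratio, and the patch size are all hypothetical), not the authors' released implementation, which additionally selects features adaptively and applies consistency losses.

```python
import numpy as np


def channel_swap(f_src, f_tgt, ratio=0.25, rng=None):
    """Swap a random subset of channels between two (C, H, W) feature maps.

    Hypothetical sketch: the paper's method selects channels adaptively
    (semantically irrelevant ones); here we pick them uniformly at random.
    """
    rng = rng or np.random.default_rng(0)
    c = f_src.shape[0]
    idx = rng.choice(c, size=max(1, int(c * ratio)), replace=False)
    s, t = f_src.copy(), f_tgt.copy()
    # Exchange the selected channels between the two feature maps.
    s[idx], t[idx] = f_tgt[idx], f_src[idx]
    return s, t


def spatial_swap(f_src, f_tgt, patch=2, rng=None):
    """Swap a random spatial patch between two (C, H, W) feature maps.

    Hypothetical sketch: the paper's method targets background regions;
    here the patch location is chosen uniformly at random.
    """
    rng = rng or np.random.default_rng(0)
    _, h, w = f_src.shape
    y = int(rng.integers(0, h - patch + 1))
    x = int(rng.integers(0, w - patch + 1))
    s, t = f_src.copy(), f_tgt.copy()
    # Exchange the same patch region across all channels.
    s[:, y:y + patch, x:x + patch] = f_tgt[:, y:y + patch, x:x + patch]
    t[:, y:y + patch, x:x + patch] = f_src[:, y:y + patch, x:x + patch]
    return s, t
```

In a full training loop, the swapped feature maps would be fed through the remaining network layers and constrained to produce predictions consistent with those of the unswapped features.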