JOURNAL ARTICLE

Adversarial Mixup Synthesis Training for Unsupervised Domain Adaptation

Abstract

Domain adversarial training is a popular approach for Unsupervised Domain Adaptation (UDA). However, the transferability of the adversarial training framework may drop greatly on adaptation tasks with a large distribution divergence between the source and target domains. In this paper, we propose a new approach, termed Adversarial Mixup Synthesis Training (AMST), to alleviate this issue. AMST augments training with synthesized samples obtained by linearly interpolating between pairs of hidden representations and their domain labels. In this way, AMST encourages the model to make consistent but less confident domain predictions at interpolated points, which leads to domain-specific representations with fewer directions of variance. Building on previous work, we conduct a theoretical analysis of this phenomenon under ideal conditions and show that AMST can improve generalization ability. Finally, experiments on benchmark datasets demonstrate the effectiveness and practicality of AMST. We will publicly release our code on GitHub soon.
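The core operation described above, linearly interpolating pairs of hidden representations together with their domain labels, follows the standard mixup recipe. Below is a minimal, hedged sketch of that interpolation step in plain Python; the function name, the Beta(2, 2) mixing distribution, and the toy feature vectors are illustrative assumptions, not the authors' exact implementation.

```python
import random

def mixup_interpolate(h_src, h_tgt, y_src, y_tgt, lam):
    """Mixup-style interpolation (illustrative sketch, not the paper's code).

    h_src, h_tgt: hidden representations (lists of floats) from a
                  source-domain and a target-domain example.
    y_src, y_tgt: scalar domain labels (e.g. 0.0 = source, 1.0 = target).
    lam:          mixing coefficient in [0, 1].
    Returns the interpolated representation and its soft domain label.
    """
    h_mix = [lam * a + (1.0 - lam) * b for a, b in zip(h_src, h_tgt)]
    y_mix = lam * y_src + (1.0 - lam) * y_tgt
    return h_mix, y_mix

# Hypothetical hidden features and domain labels for one source/target pair.
h_s, h_t = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
y_s, y_t = 0.0, 1.0

# As in standard mixup, lam is commonly drawn from a Beta distribution.
lam = random.betavariate(2.0, 2.0)
h_mix, y_mix = mixup_interpolate(h_s, h_t, y_s, y_t, lam)
```

The soft label `y_mix` is what lets the domain discriminator be trained toward consistent, low-confidence predictions on interpolated points rather than hard 0/1 decisions.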

Keywords:
Unsupervised domain adaptation; adversarial training; mixup; generalization; machine learning; artificial intelligence

Metrics

Cited by: 6
FWCI (Field-Weighted Citation Impact): 0.73
References: 55
Citation Normalized Percentile: 0.75

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
COVID-19 diagnosis using AI
Health Sciences →  Medicine →  Radiology, Nuclear Medicine and Imaging
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition