JOURNAL ARTICLE

Learning Latent Representation for Robust Unsupervised Domain Adaptation

Abstract

Deep Neural Networks (DNNs) have achieved impressive performance across a wide range of applications, but they may not generalize well to new data because of distribution shift. This problem can manifest in several ways, such as sample selection bias, class distribution shift, and covariate shift. One important case is domain shift, which occurs when test data are drawn from a new target domain that differs from the training data in appearance, background, or style. Because manually annotating data in a new domain is time-consuming and expensive, Unsupervised Domain Adaptation (UDA) aims to learn domain-invariant representations from labeled source-domain data and unlabeled target-domain data. This thesis focuses on the UDA problem and explores a more challenging setting, Robust Unsupervised Domain Adaptation (RUDA), in which corrupted samples may exist in the target domain. DNNs are vulnerable to feature corruptions such as well-crafted adversarial attacks and common corruptions, so their performance must be certified not only on clean data but also on corrupted data. The goal of this thesis is to provide a new understanding of both UDA and RUDA from the perspective of latent representations and their distributions. For vanilla UDA, we investigate the incomplete domain adaptation issue of current advanced adversarial domain adaptation methods and propose a feature gradient distribution divergence as a complementary alignment metric. For robustness against common corruptions in UDA, we show that the key is to alleviate the feature shift of corrupted samples; to this end, we develop an unsupervised adversarial regularization method that penalizes these feature shifts and enables the model to generalize better to unseen types of corruption. For robustness against adversarial attacks, we investigate how to generalize well to attacks generated from future data of the target domain, and we demonstrate that reducing the feature-shift distribution divergence between the training and test datasets of the target domain certifies better robust generalization.
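
To make the feature-shift idea concrete, the sketch below gives a minimal PyTorch rendering of an unsupervised feature-shift penalty of the kind the abstract describes: an inner loop searches for a small perturbation that maximizes the shift between clean and perturbed features, and the resulting shift is returned as a loss to minimize. This is an illustrative reading of the abstract, not the authors' implementation; the `encoder` interface, the L-infinity budget, the PGD-style inner loop, and the MSE shift measure are all assumptions.

```python
import torch
import torch.nn.functional as F

def feature_shift_penalty(encoder, x, eps=8 / 255, step=2 / 255, n_steps=3):
    """Hypothetical helper: PGD-style inner maximization of the feature
    shift within an L-inf ball, returned as a regularization loss."""
    f_clean = encoder(x).detach()          # anchor features; no gradient here
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        shift = F.mse_loss(encoder(x + delta), f_clean)
        (grad,) = torch.autograd.grad(shift, delta)
        # ascend on the shift, then project back into the eps-ball
        # (clamping x + delta to the valid pixel range omitted for brevity)
        delta = (delta + step * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    # outer minimization: the training loop adds this term to its objective
    return F.mse_loss(encoder(x + delta), f_clean)
```

In a full RUDA training loop, this term would be computed on unlabeled target batches and added, with a weighting coefficient, to the usual source classification and domain-alignment losses; since it needs no target labels, the regularization remains unsupervised.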

Keywords:
Robustness, Domain adaptation, Labeled data, Feature learning, Latent representation, Pattern recognition, Test data, Adversarial attacks

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.22

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence
Adversarial Robustness in Machine Learning
Physical Sciences → Computer Science → Artificial Intelligence
Face recognition and analysis
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

Latent subspace sparse representation-based unsupervised domain adaptation

Shuai Liu, Hao Sun, Fumin Zhao, Shilin Zhou

Journal: Proceedings of SPIE Year: 2015 Vol: 9813 Pages: 981307
JOURNAL ARTICLE

Learning Smooth Representation for Unsupervised Domain Adaptation

Guanyu Cai, Lianghua He, Mengchu Zhou, Hesham Alhumade, Die Hu

Journal: IEEE Transactions on Neural Networks and Learning Systems Year: 2021 Vol: 34 (8) Pages: 4181-4195
JOURNAL ARTICLE

Learning domain-shared group-sparse representation for unsupervised domain adaptation

Baoyao Yang, Andy J. Ma, Pong C. Yuen

Journal: Pattern Recognition Year: 2018 Vol: 81 Pages: 615-632
JOURNAL ARTICLE

Adversarially robust unsupervised domain adaptation

Lianghe Shi, Weiwei Liu

Journal: Artificial Intelligence Year: 2025 Vol: 347 Pages: 104383