JOURNAL ARTICLE

Adversarial Reinforcement Learning for Unsupervised Domain Adaptation

Abstract

Transferring knowledge from an existing labeled domain to a new domain often suffers from domain shift, in which performance degrades because of differences between the domains. Domain adaptation is a prominent approach to mitigating this problem. Many pre-trained neural networks are available for feature extraction; however, little work discusses how to select the best feature instances across different pre-trained models for both the source and target domains. We propose a novel approach that employs reinforcement learning to select the most relevant features across the two domains. Specifically, in this framework, we use Q-learning to learn a policy for an agent that makes feature selection decisions by approximating the action-value function. After selecting the best features, we propose adversarial distribution alignment learning to improve the prediction results. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods.
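The abstract describes Q-learning over feature selection decisions. As a rough illustration of that idea (not the authors' implementation), the sketch below runs tabular Q-learning on a toy pool of candidate features: the state is the set of features chosen so far, an action adds one feature, and a stand-in relevance score replaces the paper's cross-domain reward. All names, sizes, and the reward signal are illustrative assumptions.

```python
# Toy tabular Q-learning for feature selection (illustrative sketch only).
# State = frozenset of chosen feature indices; action = add one feature.
import random

N_FEATURES = 4                       # toy pool of candidate features (assumption)
BUDGET = 2                           # how many features to select (assumption)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2    # learning rate, discount, exploration
EPISODES = 500

# Stand-in per-feature "relevance" rewards; in the paper the reward would
# come from cross-domain predictive usefulness, which we do not model here.
relevance = [0.1, 0.9, 0.3, 0.7]

Q = {}  # (state, action) -> estimated action value


def q(state, action):
    return Q.get((state, action), 0.0)


random.seed(0)
for _ in range(EPISODES):
    state = frozenset()
    while len(state) < BUDGET:
        actions = [a for a in range(N_FEATURES) if a not in state]
        if random.random() < EPS:                     # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: q(state, x))
        next_state = state | {a}
        reward = relevance[a]                         # stand-in reward signal
        remaining = [x for x in range(N_FEATURES) if x not in next_state]
        future = max((q(next_state, x) for x in remaining), default=0.0)
        if len(next_state) >= BUDGET:                 # episode ends at the budget
            future = 0.0
        # Standard Q-learning update toward the bootstrapped target.
        Q[(state, a)] = q(state, a) + ALPHA * (reward + GAMMA * future - q(state, a))
        state = next_state

# Greedy rollout with the learned values.
state = frozenset()
while len(state) < BUDGET:
    actions = [a for a in range(N_FEATURES) if a not in state]
    state |= {max(actions, key=lambda x: q(state, x))}
selected = sorted(state)
print(selected)
```

With this toy reward the greedy policy settles on the two highest-relevance features; the paper's actual agent would instead score selections by downstream adaptation performance.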

Keywords:
Reinforcement learning; Domain adaptation; Feature selection; Adversarial learning; Feature extraction; Artificial neural networks; Machine learning

Metrics

- Cited by: 15
- FWCI (Field-Weighted Citation Impact): 2.12
- References: 82
- Citation Normalized Percentile: 0.89

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Machine Learning and Data Classification
Physical Sciences →  Computer Science →  Artificial Intelligence
