JOURNAL ARTICLE

Unsupervised Multi-Domain Adaptation with Feature Embeddings

Abstract

Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches have two major weaknesses. First, they often require the specification of "pivot features" that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature template structure common in NLP problems. Second, unsupervised domain adaptation is typically treated as a task of moving from a single source to a single target domain. In reality, test data may be diverse, relating to the training data in some ways but not others. We propose an alternative formulation, in which each instance has a vector of domain attributes, which can be used to distill the domain-invariant properties of each feature.
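The formulation sketched above can be illustrated with a minimal example. The code below is a hypothetical sketch, not the paper's implementation: it assumes each feature gets a shared embedding plus one component per active binary domain attribute, so the shared part is the candidate for the domain-invariant properties of the feature. All names (`shared`, `per_attr`, `embed`) are illustrative.

```python
import numpy as np

# Hedged sketch of domain-attribute feature embeddings (illustrative only).
# Each feature f has a shared embedding and one embedding per domain attribute;
# an instance's domain is described by a binary attribute vector.
rng = np.random.default_rng(0)
n_features, n_attrs, dim = 5, 3, 4

shared = rng.normal(size=(n_features, dim))            # domain-invariant component
per_attr = rng.normal(size=(n_attrs, n_features, dim)) # attribute-specific components

def embed(feature_id, domain_attrs):
    """Embedding of a feature in a domain given its binary attribute vector."""
    vec = shared[feature_id].copy()
    for a, active in enumerate(domain_attrs):
        if active:
            vec += per_attr[a, feature_id]
    return vec

# A domain with attributes [1, 0, 1] combines the shared component with
# the components for attributes 0 and 2.
v = embed(2, [1, 0, 1])
```

In this toy setup the embeddings are random; in a learned model they would be fit so that the shared component carries whatever predictive signal transfers across all domains, while the attribute components absorb domain-specific variation.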

Keywords:
Computer science, Artificial intelligence, Machine learning, Pattern recognition, Representation learning, Feature learning, Domain adaptation, Embedding, Feature (linguistics), Feature vector, Heuristics, Classifier

Metrics

Cited By: 57
FWCI (Field Weighted Citation Impact): 9.74
Refs: 41
Citation Normalized Percentile: 0.99 (in top 1%)

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Text and Document Classification Technologies
Physical Sciences → Computer Science → Artificial Intelligence