JOURNAL ARTICLE

Evaluating Resource-Lean Cross-Lingual Embedding Models in Unsupervised Retrieval

Abstract

Cross-lingual embeddings (CLEs) facilitate cross-lingual natural language processing and information retrieval. Recently, a wide variety of resource-lean projection-based models for inducing CLEs has been introduced, requiring limited or no bilingual supervision. Despite their potential usefulness in downstream IR and NLP tasks, these CLE models have almost exclusively been evaluated on word translation tasks. In this work, we provide a comprehensive comparative evaluation of projection-based CLE models for both sentence-level and document-level cross-lingual information retrieval (CLIR). We show that in some settings resource-lean CLE-based CLIR models may outperform resource-intensive models that rely on full-blown machine translation (MT). We hope our work serves as a guideline for CLIR practitioners choosing the right model.
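The projection-based CLE induction the abstract refers to is commonly formulated as an orthogonal Procrustes problem: given two independently trained monolingual embedding spaces and a small seed translation dictionary, learn an orthogonal map from the source space into the target space. A minimal sketch with synthetic data (the arrays, dimensions, and seed dictionary below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative toy data: monolingual embeddings for a source and a target
# language, plus a small seed dictionary of translation pairs — the
# "limited bilingual supervision" of resource-lean projection models.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))        # source-language word embeddings
Y = rng.normal(size=(1000, 50))        # target-language word embeddings
seed = [(i, i) for i in range(200)]    # (src_idx, tgt_idx) seed pairs

Xs = X[[s for s, _ in seed]]           # source vectors of seed words
Ys = Y[[t for _, t in seed]]           # target vectors of seed words

# Orthogonal Procrustes: W = argmin ||Xs W - Ys||_F  s.t.  W^T W = I,
# solved in closed form via the SVD of Xs^T Ys.
U, _, Vt = np.linalg.svd(Xs.T @ Ys)
W = U @ Vt                             # orthogonal projection matrix

# Map all source embeddings into the target space; CLIR can then rank
# documents by cosine similarity in this shared space.
X_proj = X @ W
```

This closed-form solution is one standard instantiation of a projection-based CLE model; the paper's evaluation covers a range of such models, including ones with weaker or no supervision.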

Keywords:
Computer science; Machine translation; Natural language processing; Artificial intelligence; Projection; Cross-language information retrieval; Embedding; Information retrieval

Metrics

Cited by: 36
FWCI (Field-Weighted Citation Impact): 2.30
References: 18
Citation Normalized Percentile: 0.90 (is in top 1%; is in top 10%)

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Graph Neural Networks (Physical Sciences → Computer Science → Artificial Intelligence)