Abstract

Previous studies have shown that cross-lingual knowledge distillation can significantly improve the performance of pre-trained models on cross-lingual similarity matching tasks. However, the student model in this setting needs to be large; otherwise, its performance drops sharply, making it impractical to deploy on memory-limited devices. To address this issue, we delve into cross-lingual knowledge distillation and propose a multi-stage distillation framework for constructing a small but high-performance cross-lingual model. In our framework, contrastive learning, a bottleneck module, and a parameter-recurrence strategy are combined to prevent performance from being compromised during compression. Experimental results demonstrate that our method can compress XLM-R and MiniLM by more than 50%, while performance drops by only about 1%.
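To make two of the framework's ingredients concrete, below is a minimal sketch (not the authors' released code) of one distillation step: a contrastive, InfoNCE-style loss that aligns the student's sentence embeddings with a frozen teacher's, plus a bottleneck projection that maps a narrow student hidden size up to the teacher's dimension. The hidden sizes, temperature, and all names here are illustrative assumptions, and the parameter-recurrence part of the framework is not shown.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckHead(nn.Module):
    """Projects a small student hidden size up to the teacher's dimension
    so the two embedding spaces can be compared directly."""
    def __init__(self, student_dim: int = 384, teacher_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

def contrastive_distill_loss(student_emb: torch.Tensor,
                             teacher_emb: torch.Tensor,
                             temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over a batch: each student embedding should be most similar
    to the teacher embedding of the same sentence (the diagonal), with the
    other in-batch teacher embeddings acting as negatives."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    logits = s @ t.T / temperature          # (B, B) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)

# In practice the embeddings come from the frozen teacher (e.g. XLM-R)
# and the compressed student; random tensors stand in here.
head = BottleneckHead()
student_emb = head(torch.randn(8, 384))   # student output, projected
teacher_emb = torch.randn(8, 768)         # teacher output (frozen)
loss = contrastive_distill_loss(student_emb, teacher_emb)
loss.backward()

The contrastive formulation is what lets a small student survive compression: instead of regressing each teacher vector in isolation, the student only has to preserve the relative similarity structure of the batch.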

Keywords:
Knowledge distillation; similarity matching; model compression; semantic similarity

