JOURNAL ARTICLE

Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation

Abstract

Building natural language processing (NLP) models is challenging in low-resource scenarios where only limited data are available. Optimization-based meta-learning algorithms achieve promising results in low-resource scenarios by adapting a well-generalized model initialization to handle new tasks. Nonetheless, these approaches suffer from the memorization overfitting issue, where the model tends to memorize the meta-training tasks while ignoring support sets when adapting to new tasks. To address this issue, we propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation. Specifically, we introduce a task-specific memory module to store support set information and construct an imitation module to force query sets to imitate the behaviors of some representative support-set samples stored in the memory. A theoretical analysis is provided to prove the effectiveness of our method, and empirical results also demonstrate that our method outperforms competitive baselines on both text classification and generation tasks.
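The abstract describes two components layered on an optimization-based (MAML-style) meta-learner: a task-specific memory that stores support-set information, and an imitation term that pulls query examples toward representative support samples. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' MemIML implementation: the Encoder, build_memory, and imitation_loss names, the per-class mean-feature memory, the MSE imitation loss, the single first-order inner step, and the 0.1 loss weight are all illustrative assumptions.

# Hypothetical sketch of the ideas in the abstract; NOT the authors'
# released MemIML code. Assumes a PyTorch classifier whose penultimate
# features we can read off, and a 1-step, first-order inner loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy text encoder: mean-pooled embeddings -> hidden feature -> class logits."""
    def __init__(self, vocab_size=1000, hidden=64, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def features(self, token_ids):  # (batch, seq_len) -> (batch, hidden)
        return self.emb(token_ids).mean(dim=1)

    def forward(self, token_ids):
        return self.head(self.features(token_ids))

def build_memory(model, support_x, support_y, n_classes):
    """Task-specific memory: one representative (mean) support feature per class."""
    with torch.no_grad():
        feats = model.features(support_x)
    slots = []
    for c in range(n_classes):
        mask = support_y == c
        # Fall back to the global mean if a class is absent from the support set.
        slots.append(feats[mask].mean(0) if mask.any() else feats.mean(0))
    return torch.stack(slots)  # (n_classes, hidden)

def imitation_loss(model, query_x, query_y, memory):
    """Pull each query feature toward its class's memory slot, so the adapted
    model must rely on support-set information rather than memorized priors."""
    q = model.features(query_x)
    return F.mse_loss(q, memory[query_y].detach())

# --- one meta-training episode (single inner step, first-order for brevity) ---
n_classes = 5
model = Encoder(n_classes=n_classes)
support_x = torch.randint(0, 1000, (10, 12)); support_y = torch.randint(0, n_classes, (10,))
query_x   = torch.randint(0, 1000, (15, 12)); query_y   = torch.randint(0, n_classes, (15,))

memory = build_memory(model, support_x, support_y, n_classes)

# Inner loop: adapt on the support set (standard optimization-based meta-learning).
inner = torch.optim.SGD(model.parameters(), lr=0.1)
inner.zero_grad()
F.cross_entropy(model(support_x), support_y).backward()
inner.step()
inner.zero_grad()  # clear support grads before the outer/query backward

# Outer objective: query loss plus the imitation term (0.1 is a made-up weight).
loss = F.cross_entropy(model(query_x), query_y) \
       + 0.1 * imitation_loss(model, query_x, query_y, memory)
loss.backward()  # an outer update of the shared initialization would follow here
print(f"episode loss: {loss.item():.4f}")

Because the memory slots are detached, the imitation term only moves query representations toward support-set evidence; in the paper's full method the memory and imitation modules are learned components rather than the fixed class means used in this sketch.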

Keywords:
Overfitting; Construct; Task; Set; Memorization; Initialization; Natural language; Imitation; Language model

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.30

Topics

Machine Learning and Data Classification
Physical Sciences → Computer Science → Artificial Intelligence
Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence
Topic Modeling
Physical Sciences → Computer Science → Artificial Intelligence