Ritul Phukan, Monalisa Daimari, Anupam Kharghoria, Biman Basumatary
Low-resource languages (languages with limited annotated corpora, lexicons, and digital resources) pose major challenges for modern natural language processing (NLP). Recent progress in transfer learning, multilingual pretraining, parameter-efficient adaptation, data augmentation, and community-driven dataset creation has substantially improved capabilities for many such languages, yet large performance gaps remain compared to high-resource languages. This article surveys the technical advances that enable NLP for low-resource languages (including unsupervised and weakly supervised methods, multilingual and massively multilingual models, few-shot and in-context learning with large language models, and adapter/LoRA-style parameter-efficient fine-tuning). We examine practical pipelines for tasks such as machine translation, speech recognition, OCR, and information extraction; describe prominent dataset and community projects; summarize typical evaluation strategies and their pitfalls; and outline promising research directions (community data collection, privacy-preserving methods, on-device adaptation, and ethics-aware deployments). The review highlights approaches that balance performance, compute cost, and data efficiency, and recommends research and deployment practices to accelerate inclusive language technology.
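To make the adapter/LoRA-style idea concrete, below is a minimal sketch in plain PyTorch, not tied to any particular adapter library; the class name LoRALinear and the hyperparameters r and alpha are illustrative assumptions. A pretrained linear layer is frozen and a trainable low-rank update B A, scaled by alpha/r, is added to its output, so only a small fraction of parameters is updated when adapting to a new language.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper (hypothetical names): freezes a pretrained
    nn.Linear and adds a trainable low-rank update to its output."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: the effective weight update B @ A has rank <= r.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the scaled low-rank update.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Usage sketch: wrap one projection of a pretrained multilingual model,
# then fine-tune only the A/B factors on target-language data.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 2 * 8 * 768 = 12288 of ~590k total
```

Because B is initialized to zero, the wrapped layer initially reproduces the pretrained model exactly; adaptation then learns roughly 2% of the layer's parameters, which is why such methods balance performance against compute and data cost in low-resource settings.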