JOURNAL ARTICLE

Retrieval-Augmented Generation and Hallucination in Large Language Models: A Scholarly Overview

Sagar Gupta

Year: 2025 · Journal: Scholars Journal of Engineering and Technology · Vol: 13(05) · Pages: 328-330

Abstract

Large Language Models (LLMs) have revolutionized natural language processing tasks, yet they often suffer from "hallucination": the confident generation of factually incorrect information. Retrieval-Augmented Generation (RAG) has emerged as a promising technique to mitigate hallucinations by grounding model responses in external documents. This article explores the underlying causes of hallucinations in LLMs, the mechanisms and architectures of RAG systems, their effectiveness in reducing hallucinations, and ongoing challenges. We conclude with a discussion of future directions for integrating retrieval mechanisms more seamlessly into generative architectures.
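The retrieve-then-generate pattern the abstract describes can be sketched in a few lines. This is a minimal illustration, not the article's method: the keyword-overlap retriever, the toy corpus, and the prompt template below are all hypothetical stand-ins for a real vector index and LLM API.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) pipeline.
# All components are illustrative; a production system would use an
# embedding-based vector index and an actual LLM for generation.

def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, passages):
    """Ground the generator by prepending the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

corpus = [
    "RAG grounds model responses in external documents.",
    "Hallucination is the confident generation of incorrect facts.",
    "Transformers use self-attention over token sequences.",
]
query = "What is hallucination in LLMs?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
```

The grounding step is the prompt assembly: because the generator sees only retrieved passages, its answers can be checked against (and attributed to) those sources, which is the mechanism by which RAG reduces hallucination.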

Keywords:
Linguistics, Psychology, Computer science, History, Natural language processing, Philosophy

