JOURNAL ARTICLE

Enhancing Generative AI Capabilities Through Retrieval-Augmented Generation Systems and LLMs

Ankit Bansal, Swathi Suddala

Year: 2024 | Journal: Library Progress (International) | Vol: 44 (03) | Pages: 17776-17787

Abstract

Large language models (LLMs) are poised to enable a broad range of new programming, feedback, scripting, and automated-testing systems. However, recent critiques of their accuracy and precision highlight several near-term limitations of generative AI frameworks. In building a first-generation production version of retrieval-augmented generation (RAG) systems that enhance information access, we expected improvements in the robustness, accuracy, and overall performance of today's best LLMs, along with rapid deployment of API integrations, interfaces, and workflows. As major data centers pushed hardware forward and infrastructure clouds entered software scaling races, many routes to improved near-term capabilities emerged, including ultra-large language models with probabilistic reasoning and factored representations introduced at this workshop. These emerging extensions form the basis for RAG system improvements. Ongoing research and development in other areas covers the hardware, software, and neural-model design and training needs of such programs, whose features will soon be incorporated into hybrid-cloud production AI systems.
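The abstract describes RAG at a high level: retrieve relevant passages, then feed them to an LLM so its answer is grounded in documents. The core loop can be sketched minimally as follows; the toy corpus, the token-overlap scoring, and the prompt format are illustrative assumptions, not the system described in the article.

```python
# Minimal RAG sketch: retrieve passages, then build a grounded prompt.
# Corpus, scoring, and prompt format are illustrative assumptions only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple token overlap with the query (a stand-in
    for the dense or hybrid retrievers used in production RAG systems)."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_tokens & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved passages so the generator can ground its answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG pairs a retriever with a generator to ground answers in documents.",
    "Probabilistic reasoning helps models weigh uncertain evidence.",
    "Hybrid cloud systems combine on-premises and public cloud resources.",
]
query = "How does RAG ground answers?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)  # this prompt would then be sent to the LLM
```

In a real deployment the overlap scorer would be replaced by an embedding-based retriever and the final prompt sent through an LLM API; the structure of the loop stays the same.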

Keywords:
Generative grammar, Computer science, Artificial intelligence

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.64
Refs: 0
Citation Normalized Percentile: 0.71
Is in top 1%
Is in top 10%


Topics

Semantic Web and Ontologies (Physical Sciences → Computer Science → Artificial Intelligence)
AI-based Problem Solving and Planning (Physical Sciences → Computer Science → Artificial Intelligence)
Data Mining Algorithms and Applications (Physical Sciences → Computer Science → Information Systems)