Retrieval-Augmented Generation (RAG) has emerged as a promising approach to improving the faithfulness and reliability of large language models (LLMs) by grounding their outputs in external knowledge. However, the opacity of LLMs and the lack of interpretability in RAG pipelines limit user trust, especially in high-stakes domains. This survey analyzes three recent contributions to explainable RAG: RAG-Ex, a generic, model-agnostic explainer for LLM-based RAG systems; a neuro-symbolic RAG framework for predicting road users’ behaviors in autonomous driving; and a RAG-based explainable LLM pipeline for automatic job safety reporting. We compare their motivations, methods, findings, and implications, highlighting common challenges and research opportunities.
Aniket Mishra, Aniket Gupta, Anil Kumar Sagar