With increasing public awareness of health and safety, there is growing demand for intelligent systems capable of providing precise answers to medical and health-related questions. However, Large Language Models (LLMs) are prone to "hallucination" in medical-domain responses, generating seemingly plausible but inaccurate content, which could lead to serious consequences in medical scenarios. To address this challenge, this study proposes a knowledge graph-based retrieval-augmented question answering framework, with a specific focus on the dermatology domain. First, this study constructed a medical knowledge graph ontology for dermatology, encompassing key dimensions such as disease definitions, symptoms, diagnoses, and treatments. Second, a method was designed that uses the large language model GLM-4 (General Language Model-4) for automated knowledge extraction from medical guidelines to construct a domain-specific knowledge graph. Third, this study introduced fuzzy entity recognition and knowledge graph enhancement mechanisms, which identify key entities in questions and retrieve relevant knowledge from the graph to augment the original queries. Experiments demonstrate that our approach effectively reduces hallucinations in LLM-generated medical responses. On the test set, a comparison with the baseline method shows that the proposed framework achieves superior performance, with the average BLEU score increasing from 0.0102 to 0.0157, and the average BERT_SCORE P, R, and F1 improving from 0.5037, 0.7120, and 0.5897 to 0.5273, 0.7346, and 0.6135, respectively. These results indicate significant accuracy improvements, particularly in diagnostic recommendations and treatment plans. This methodology provides a new paradigm for building safer and more reliable medical intelligent systems and can be extended to other specialized medical domains.
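The fuzzy-entity-recognition and query-augmentation step described above can be sketched roughly as follows. This is a minimal illustration assuming a toy in-memory knowledge graph and stdlib string matching; the entity names, triples, prompt format, and the `fuzzy_match_entities`/`augment_query` helpers are all hypothetical and not the paper's actual implementation.

```python
# Sketch: fuzzily match question terms to KG entities, then prepend the
# retrieved facts to the query before passing it to the LLM.
import difflib

# Hypothetical dermatology knowledge graph: entity -> list of (relation, object).
KG = {
    "psoriasis": [
        ("symptom", "scaly erythematous plaques"),
        ("treatment", "topical corticosteroids"),
    ],
    "atopic dermatitis": [
        ("symptom", "pruritic eczematous lesions"),
        ("treatment", "emollients and topical calcineurin inhibitors"),
    ],
}

def fuzzy_match_entities(question, kg, cutoff=0.6):
    """Match question unigrams and bigrams against KG entity names."""
    words = question.lower().replace("?", "").split()
    candidates = words + [" ".join(pair) for pair in zip(words, words[1:])]
    matched = []
    for cand in candidates:
        for hit in difflib.get_close_matches(cand, kg.keys(), n=1, cutoff=cutoff):
            if hit not in matched:
                matched.append(hit)
    return matched

def augment_query(question, kg):
    """Prepend retrieved KG facts to the question to ground the LLM's answer."""
    facts = []
    for entity in fuzzy_match_entities(question, kg):
        facts += [f"{entity} --{rel}--> {obj}" for rel, obj in kg[entity]]
    if not facts:
        return question  # nothing retrieved; pass the question through unchanged
    return "Known facts:\n" + "\n".join(facts) + "\nQuestion: " + question

augmented = augment_query("What is the treatment for psoriasis?", KG)
```

The augmented prompt carries graph facts alongside the original question, which is the mechanism by which retrieval grounds the model's response and reduces hallucination.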
Lifan Han, Xin Wang, Zhao Li, Heyi Zhang, Zirui Chen