This study evaluates and optimizes transformer-based models for question-answering systems, focusing on health-related inquiries. Using a specialized dataset extracted from Wikipedia articles, six transformer models — BERT-base-cased, ELECTRA-base, DeBERTa-base, XLM-RoBERTa-base, DistilBERT-base, and ALBERT-base — were compared on their F1 scores and exact-match accuracy. ELECTRA-base and DeBERTa-base performed best, underscoring the value of models equipped with denoising pre-training objectives and disentangled attention. The results highlight the critical role of tailored model selection in specific domains, particularly health-related contexts. Future work may explore fine-tuning strategies and optimizations for health datasets, addressing challenges in medical information extraction and question answering. This study contributes practical insights to the natural language processing field, guiding advancements in transformer-based question-answering systems, especially in the health domain.
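The two evaluation metrics named above, exact match and F1, are conventionally computed SQuAD-style: answers are normalized (lowercased, punctuation and articles stripped) before comparing the prediction to the reference either verbatim or by token overlap. As a minimal sketch of these standard definitions (not the paper's exact evaluation script):

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, reference: str) -> int:
    """1 if the normalized strings are identical, else 0."""
    return int(normalize_answer(prediction) == normalize_answer(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 between normalized prediction and reference."""
    pred_tokens = normalize_answer(prediction).split()
    ref_tokens = normalize_answer(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, the prediction "the influenza virus" against the reference "influenza" scores an exact match of 0 but a nonzero F1, since one of the prediction's two content tokens overlaps the reference.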