JOURNAL ARTICLE

Assessing Hallucination in Large Language Models Under Adversarial Attacks

Keywords:
Adversarial system, Computer science, Language model, Computer security, Natural language processing, Artificial intelligence

Metrics

Cited By: 2
FWCI (Field-Weighted Citation Impact): 1.28
Refs: 14
Citation Normalized Percentile: 0.79

Topics

Adversarial Robustness in Machine Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Anomaly Detection Techniques and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Epilepsy research and treatment (Health Sciences → Medicine → Psychiatry and Mental health)