JOURNAL ARTICLE

Generative Local Interpretable Model-Agnostic Explanations

Mohammad Nagahisarchoghaei, Mirhossein Mousavi Karimi, Shahram Rahimi, Logan Cummins, Ghodsieh Ghanbari

Year: 2023   Journal: Proceedings of the ... International Florida Artificial Intelligence Research Society Conference   Vol: 36   Publisher: George A. Smathers Libraries

Abstract

The use of AI and machine learning models in industry is growing rapidly. Because of this growth and the strong performance of these models, more mission-critical decision-making intelligent systems have been developed. Despite their success, AI solutions used for decision-making have a significant drawback: a lack of transparency. This opacity, particularly in complex state-of-the-art machine learning algorithms, leaves users with little understanding of how these models reach specific decisions. To address this issue, algorithms such as LIME and SHAP (Kernel SHAP) have been introduced. These algorithms aim to explain AI models by generating data samples around an intended test instance through perturbation of its features. This process has the drawback of potentially producing invalid data points that fall outside the data domain. In this paper, we aim to improve LIME and SHAP by using a Variational AutoEncoder (VAE), pre-trained on the training dataset, to generate realistic data around the test instance. We also employ a sensitivity-based feature importance with a Boltzmann distribution to help explain the behavior of the black-box model around the intended test instance.
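The core idea in the abstract — perturbing in a VAE's latent space so that neighborhood samples stay near the data manifold, then fitting a LIME-style weighted local surrogate — can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the paper's implementation: `black_box` is a toy classifier, and the linear `encode`/`decode` pair is a placeholder for a trained VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the black-box model under explanation.
def black_box(X):
    return (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)

# Placeholder "VAE": a linear encoder/decoder pair. In the paper's setting
# these would come from a VAE pre-trained on the training dataset.
W = rng.normal(size=(2, 3))           # latent dim 2, data dim 3
def encode(x): return x @ W.T         # data -> latent
def decode(Z): return Z @ W           # latent -> data

def explain(x, n_samples=500, sigma=0.3, width=0.75):
    """LIME-style local surrogate, except neighbors are decoded from
    latent-space perturbations rather than raw feature perturbations,
    so they stay closer to the data distribution."""
    z = encode(x)
    Z = z + sigma * rng.normal(size=(n_samples, z.size))  # perturb latent code
    X = decode(Z)                                         # realistic neighbors
    y = black_box(X)
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                    # locality kernel
    A = np.hstack([np.ones((n_samples, 1)), X])           # intercept + features
    coef = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    return coef[1:]                                       # local feature weights

x0 = np.array([0.5, -0.2, 0.1])
weights = explain(x0)
print(weights.shape)  # one importance weight per input feature
```

Swapping the placeholder `encode`/`decode` for a real trained VAE (and the surrogate fit for Kernel SHAP's weighting) recovers the paper's setting; the sketch only shows where latent-space sampling slots into the usual perturb-then-fit loop.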

Keywords:
Computer science, Machine learning, Autoencoder, Artificial intelligence, Transparency (behavior), Generative model, Process (computing), Test data, Kernel (algebra), Feature (linguistics), Generative grammar, Deep learning, Mathematics

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 0.72
Refs: 17
Citation Normalized Percentile: 0.61

Citation History

Topics

Explainable Artificial Intelligence (XAI)
Physical Sciences →  Computer Science →  Artificial Intelligence
Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Improving Local Interpretable Model-agnostic Explanations Stability

Journal: International Journal of Intelligent Engineering and Systems   Year: 2024   Vol: 17 (6)   Pages: 1099-1108
BOOK-CHAPTER

Causality-Aware Local Interpretable Model-Agnostic Explanations

Martina Cinquini, Riccardo Guidotti

Communications in computer and information science Year: 2024 Pages: 108-124