Te Zhang, Christian Wagner, Jonathan M. Garibaldi
Explainable Artificial Intelligence (XAI) is of increasing importance as researchers and practitioners seek better transparency and verifiability of AI systems. Mamdani fuzzy systems can provide explanations based on their linguistic rules, and thus offer a potential pathway to XAI. A factual rule-based explanation generally refers to the set of rules executed, or fired, for a given input. However, research has shown that human explanations are often counterfactual (CF): rather than explaining why a given output was reached, they show why other potential outputs were not. Although several machine learning-based CF explanation generation methods have been proposed in recent years, almost none of them focus on fuzzy systems. Moreover, where they do, they focus on correlation, which limits the interpretive value of any CF explanations obtained, as humans expect a causal relationship in rules, i.e. we are cause-effect thinkers. In this paper, we propose a new rule generation framework for Mamdani fuzzy classification systems, which we refer to as CF-MABLAR, building on the MARkov BLAnket Rules (MABLAR) framework. CF-MABLAR approximates the causal links between the inputs and output(s) of fuzzy systems and leverages these links to generate CF rules. Uniquely, the CF rules obtained not only provide a basic CF explanation, but can also articulate how the given inputs would need to change to produce a different output, which is crucial for lay-user insight, verification and sensitivity evaluation of XAI systems, for example in decision support around credit risk, cyber security and medical assistance.