Christophe Marsala, Bernadette Bouchon‐Meunier
While fuzzy methods, and in particular fuzzy rule-based methods, have been singled out as explainable, it is not always easy to attach a linguistic label to the conclusion provided by a rule-based system for a given observation. In this paper, we focus on the case of sparse rules, with imprecise or linguistic premises and conclusions, and their use with imprecise or linguistic observations. We explore fuzzy solutions of interpolative reasoning based on analogies, with regard to desirable mathematical properties and explainability criteria. We first recall such criteria existing in the state of the art and analyse them in the light of explainable Artificial Intelligence (AI) requirements. We then propose a new method that makes it easier to explain both the result of the fuzzy interpolative reasoning and the approach used to construct it. A set of experimental comparisons with several existing fuzzy interpolative reasoning approaches is presented.
M. Setnes, Uzay Kaymak, H.R. van Nauta Lemke, H.B. Verbruggen
Dongmei Huang, Eric C.C. Tsang, Wing W. Y. Ng
Yamin Li, Dongmei Huang, Tsang, Li Na Zhang