As machine learning models are increasingly used in a wide range of applications, there is growing concern about the difficulty of understanding their predictions. The field of interpretability/explainability of artificial intelligence has developed several approaches and tools that aim to improve the understanding of such systems. These tools tend to target the knowledgeable data scientist as their main user: they usually produce plots, charts, or other graphical representations (such as color overlays on an image or text), so the user must have some technical background to consume the information. This work developed techniques that generate textual explanations for the internal behavior of a given classifier, aimed at machine learning users with limited technical proficiency. A package for textual explanation generation, called NaLax, was built and tested with users. Preliminary results were published and presented at the IEEE International Conference on Artificial Intelligence and Knowledge Engineering (AIKE) in 2019.
Rodrigo Monteiro de Aquino, Fábio Gagliardi Cozman