Explainable AI (XAI) is the study of how humans can understand the cause of a model's prediction. In this work, the problem of interest is Scene Text Recognition (STR) explainability: using XAI to understand the cause of an STR model's prediction. Recent XAI literature on STR provides only simple analyses and does not fully explore other XAI methods. In this study, we specifically work on data explainability frameworks, called attribution-based methods, that explain the important parts of the input data for deep learning models. However, integrating them into STR produces inconsistent and ineffective explanations, because they explain the model only in the global context. To solve this problem, we propose a new method, STRExp, that takes local explanations into consideration, i.e. the individual character prediction explanations. STRExp is then benchmarked across different attribution-based methods on different STR datasets and evaluated across different STR models.
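To make the global-versus-local distinction concrete, the following is a minimal sketch, not the paper's STRExp implementation: it contrasts a single "global" attribution map for the whole predicted string with "local" per-character attribution maps, using Captum's IntegratedGradients. The toy CRNN-style recognizer, tensor shapes, and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients


class ToySTRModel(nn.Module):
    """Tiny stand-in recognizer: grayscale image -> per-timestep character logits."""

    def __init__(self, num_classes=37, timesteps=8):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        # Collapse height, keep `timesteps` horizontal slots (one per character).
        self.pool = nn.AdaptiveAvgPool2d((1, timesteps))
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):                      # x: (B, 1, H, W)
        f = torch.relu(self.conv(x))
        f = self.pool(f).squeeze(2)            # (B, 16, T)
        return self.head(f.transpose(1, 2))    # (B, T, num_classes)


model = ToySTRModel().eval()
image = torch.randn(1, 1, 32, 100)             # stand-in for a word image
preds = model(image).argmax(dim=-1)[0]         # greedy per-timestep labels, shape (T,)

ig = IntegratedGradients(model)

# Local explanations: one attribution map per predicted character. For a
# (B, T, C) output, a tuple target (t, c) selects the score output[:, t, c].
local_maps = [
    ig.attribute(image, target=(t, int(preds[t])))
    for t in range(preds.shape[0])
]

# Global explanation: attribute the *summed* score of all predicted
# characters, which blends every character's evidence into a single map.
def global_score(x):
    out = model(x)                                         # (B, T, C)
    idx = preds.view(1, -1, 1).expand(x.shape[0], -1, 1)   # (B, T, 1)
    return out.gather(2, idx).sum(dim=(1, 2))              # (B,)

global_map = IntegratedGradients(global_score).attribute(image)
```

In this sketch, each local map can highlight a different image region for its character, whereas the single global map pools all characters' evidence together, which is the inconsistency attributed above to applying global attribution methods directly to STR.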