In recent years, Convolutional Neural Networks (CNNs) have been widely applied across domains owing to their powerful learning capability. However, their lack of explainability hinders their adoption in tasks requiring high reliability. Interpretability techniques are therefore key to the application and deployment of CNN models. As a typical interpretability technique for CNNs, Class Activation Mapping (CAM), which combines gradient-based weights with activation maps, is widely used to provide visual explanations for conventional CNN models. However, the activation maps adopted by CAM do not faithfully quantify the relevance between input samples and activation values. Hence, in this paper, we propose a new interpretability approach, Salience-CAM, which employs salience scores to accurately measure the relevance between input samples and activation values. To evaluate the effectiveness of Salience-CAM, we conduct comprehensive experiments on six selected time series datasets. Using an evaluation algorithm proposed in this paper, the experimental results show that Salience-CAM outperforms the baseline by discovering more discriminative features.
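For context, the gradient-weighted CAM scheme the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the standard Grad-CAM-style computation (gradient-based channel weights applied to activation maps), not the paper's Salience-CAM method; the tensor shapes and random inputs are assumptions standing in for a real CNN's activations and gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the last conv layer yields K feature maps of size H x W,
# and we have the gradient of the target class score w.r.t. each map.
# (Synthetic stand-ins; a real pipeline would extract these from a CNN.)
K, H, W = 8, 7, 7
activations = rng.standard_normal((K, H, W))   # A_k
gradients = rng.standard_normal((K, H, W))     # d y_c / d A_k

# Channel weights: global-average-pooled gradients (the alpha_k in Grad-CAM).
weights = gradients.mean(axis=(1, 2))          # shape (K,)

# Weighted sum of activation maps, followed by ReLU to keep
# only features with a positive influence on the class score.
cam = np.maximum(np.einsum("k,khw->hw", weights, activations), 0.0)

# Normalize to [0, 1] for visualization as a saliency heatmap.
cam = cam / (cam.max() + 1e-8)
print(cam.shape)  # (7, 7)
```

The paper's critique targets the `activations` term above: raw activation values do not necessarily reflect how relevant a region of the input is, which is what Salience-CAM's salience scores are designed to measure instead.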
Linjiang Zhou, Chao Ma, Xiaochuan Shi, Dian Zhang, Wei Li, Libing Wu