Deep learning frameworks have become increasingly popular in brain-computer interface (BCI) research thanks to their outstanding performance. However, as classification models they are treated as black boxes: they provide no information about what led them to a particular decision. In other words, we cannot determine whether high performance is driven by neurophysiological factors or merely by noise. Because of this limitation, it is difficult to ensure reliability commensurate with their high performance. In this study, we propose an explainable deep learning model for classifying EEG signals obtained from a motor-imagery (MI) task. Layer-wise relevance propagation (LRP) is applied to the model to interpret why it produces a given classification output. We visualize the LRP output as heatmaps in topographic form to verify neurophysiological plausibility. Furthermore, we classify EEG in a subject-independent manner to learn robust, generalized EEG features that avoid subject dependency; this approach also avoids the expense of building training data for each subject. With the proposed model, we obtained generalized heatmap patterns across all subjects. We therefore conclude that the proposed model provides a neurophysiologically reliable interpretation.
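The abstract relies on layer-wise relevance propagation to attribute a classifier's output back to its inputs. As a rough illustration only (the paper's actual network and LRP variant are not given here), a minimal NumPy sketch of the common epsilon rule for a single dense layer, with hypothetical shapes, could look like:

```python
import numpy as np

def lrp_dense(a, W, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer: redistribute the output
    relevance R_out back to the inputs a through the weights W.
    Shapes are illustrative, not taken from the paper."""
    z = a @ W                           # pre-activations of the layer
    s = R_out / (z + eps * np.sign(z))  # stabilized relevance-to-activation ratio
    c = W @ s                           # propagate ratios back through the weights
    return a * c                        # input relevance; total is (approx.) conserved

rng = np.random.default_rng(0)
a = rng.random(4)                  # hypothetical hidden-layer activations (EEG features)
W = rng.random((4, 3))             # hypothetical weights to a 3-class output layer
R_out = np.array([1.0, 0.0, 0.0])  # explain class 0 only

R_in = lrp_dense(a, W, R_out)
print(R_in, R_in.sum())  # per-input relevance; sum stays close to R_out.sum()
```

In the paper's setting, relevance propagated back to the input channels in this way is what gets rendered as a topographic heatmap over the scalp; the conservation property (relevance sums are preserved layer to layer, up to the epsilon stabilizer) is what makes the heatmap interpretable as a decomposition of the classifier's decision.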