Many real-world applications, e.g., healthcare, present multivariate time series prediction problems. In such settings, in addition to the predictive accuracy of the models, model transparency and explainability are paramount. We consider the problem of building explainable classifiers from multivariate time series data. A key criterion for understanding such predictive models is elucidating and quantifying the contribution of the time-varying input variables to the classification. Hence, we introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as the time intervals that determine the classifier output. We present results of extensive experiments with several benchmark data sets showing that the proposed method outperforms state-of-the-art baseline methods on the multivariate time series classification task. The results of our case studies demonstrate that the variables and time intervals identified by the proposed method are consistent with available domain knowledge.
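To make the idea concrete, the following is a minimal, hypothetical sketch of the dual-branch pattern the abstract describes: one branch convolves each variable along the time axis to score variable importance, while the other aggregates across variables at each time step to score time-interval importance. This is a pure-Python toy under assumed kernel values and pooling choices, not the authors' implementation.

```python
# Toy sketch (illustrative, not the paper's architecture): given a
# multivariate series X of shape (n_vars, n_steps), produce one
# importance score per variable and one per time step.

def conv1d(seq, kernel):
    """Valid-mode 1D convolution (cross-correlation) of seq with kernel."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def attributions(X, time_kernel=(0.25, 0.5, 0.25)):
    n_vars, n_steps = len(X), len(X[0])
    # Branch 1: convolve each variable along time, then pool (sum of
    # absolute responses) -> one importance score per variable.
    var_scores = [sum(abs(v) for v in conv1d(row, time_kernel)) for row in X]
    # Branch 2: at each time step, aggregate magnitudes across variables
    # -> one importance score per time step.
    step_scores = [sum(abs(X[i][t]) for i in range(n_vars))
                   for t in range(n_steps)]
    return var_scores, step_scores

X = [[0, 0, 5, 0, 0],   # variable 0: a spike at t = 2
     [1, 1, 1, 1, 1]]   # variable 1: flat
var_scores, step_scores = attributions(X)
print(var_scores.index(max(var_scores)))   # -> 0 (the spiking variable)
print(step_scores.index(max(step_scores))) # -> 2 (the spike's time step)
```

In a learned model the kernels would be trained and the attributions read off the resulting activation maps; the point here is only that the two branches yield separate, directly interpretable scores over variables and over time.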
Kevin Fauvel, Tao Lin, Véronique Masson, Élisa Fromont, Alexandre Termier