JOURNAL ARTICLE

Explainable Multivariate Time Series Classification

Abstract

Many real-world applications, e.g., healthcare, present multivariate time series prediction problems. In such settings, model transparency and explainability are as important as predictive accuracy. We consider the problem of building explainable classifiers from multivariate time series data. A key criterion for understanding such predictive models is elucidating and quantifying the contribution of the time-varying input variables to the classification. Hence, we introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as the time intervals that determine the classifier output. We present the results of extensive experiments on several benchmark data sets, showing that the proposed method outperforms state-of-the-art baseline methods on the multivariate time series classification task. The results of our case studies demonstrate that the variables and time intervals identified by the proposed method are consistent with available domain knowledge.
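The abstract describes per-variable convolutional feature extraction paired with attention over both variables and time intervals. A minimal NumPy sketch of that idea follows; it is an illustrative reconstruction, not the authors' implementation, and the function name `variable_time_attention`, the shared smoothing kernel, and the aggregation choices are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def variable_time_attention(X, kernel):
    """Score variables and time intervals of a multivariate series.

    X      -- array of shape (V, T): V variables observed over T steps.
    kernel -- 1-D convolution filter shared across variables (an assumption;
              a modular design like the paper's would learn per-variable filters).
    Returns (var_attn, time_attn, z): attention over variables, attention
    over time per variable, and a scalar attended summary for a classifier.
    """
    V, T = X.shape
    # Convolutional feature extraction: same-length filtering of each variable.
    feats = np.stack([np.convolve(X[v], kernel, mode="same") for v in range(V)])
    # Attention over time steps within each variable (rows sum to 1).
    time_attn = softmax(feats, axis=1)
    # Attention over variables, driven by each variable's attended activation.
    var_attn = softmax((feats * time_attn).sum(axis=1))
    # Attention-weighted summary that a downstream classifier would consume.
    z = float((var_attn[:, None] * time_attn * X).sum())
    return var_attn, time_attn, z

# Toy usage: inject a salient interval into variable 1 and check that the
# attention scores point at the right variable and the right time window.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))
X[1, 20:30] += 5.0                 # synthetic "important" interval
kernel = np.ones(5) / 5.0          # simple smoothing filter (assumption)
var_attn, time_attn, z = variable_time_attention(X, kernel)
assert var_attn.argmax() == 1            # variable 1 is flagged as most important
assert 20 <= time_attn[1].argmax() < 30  # the salient interval is located
```

Here both attentions are computed directly from the convolutional activations for illustration; in a trained model the filters and attention projections would be learned jointly with the classifier.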

Keywords:
Computer science, Benchmark, Artificial intelligence, Multivariate statistics, Classifier, Time series, Machine learning, Convolution, Data mining, Feature extraction, Pattern recognition, Artificial neural network

Metrics

- Cited by: 47
- FWCI (Field Weighted Citation Impact): 5.90
- References: 72
- Citation Normalized Percentile: 0.97 (in top 1%)

Topics

- Time Series Analysis and Forecasting (Physical Sciences → Computer Science → Signal Processing)
- Stock Market Forecasting Methods (Social Sciences → Decision Sciences → Management Science and Operations Research)
- Anomaly Detection Techniques and Applications (Physical Sciences → Computer Science → Artificial Intelligence)