Detecting anomalies in time series data is challenging due to their complex and volatile temporal features. Some anomalies deviate only from their local context rather than from the overall distribution. Additionally, the imbalanced distribution of normal and abnormal samples hinders efficient use of the available labels. Self-supervised approaches, which train on normal data only, are practically efficient for anomaly detection. However, they often fail to detect contextual anomalies in high-dimensional time series, because representation learning of such complex data patterns is sub-optimal. This paper introduces ContrastAD, a novel self-supervised framework for time series anomaly detection. Specifically, we employ a contrastive learning process with anomaly-induced temporal transformations. Targeting the point and contextual anomalies that appear in time series data, we design corresponding transformations that force the model to learn discrepant latent representations for normal and abnormal data. Extensive experiments show that our approach outperforms baseline anomaly detectors on various benchmark datasets. Our empirical results indicate that ContrastAD improves anomaly detection performance on noisy and high-dimensional time series, even without common repeating patterns.
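To make the notion of anomaly-induced temporal transformations concrete, the following is a minimal illustrative sketch (not the paper's actual implementation; the function names, the spike magnitude, and the segment-swap scheme are assumptions). A point-anomaly transformation injects a value far outside the local range, while a contextual-anomaly transformation swaps two segments so values remain in-distribution globally but deviate from their local temporal context:

```python
import numpy as np

def inject_point_anomaly(x, rng, scale=5.0):
    """Illustrative point-anomaly transform: add a large spike
    (scale * std) to one randomly chosen time step."""
    x = x.copy()
    t = rng.integers(len(x))
    x[t] += scale * x.std()
    return x

def inject_contextual_anomaly(x, rng, seg_len=10):
    """Illustrative contextual-anomaly transform: swap two
    non-overlapping segments. The value distribution is preserved,
    but each segment now violates its local temporal context."""
    x = x.copy()
    half = len(x) // 2
    i = rng.integers(0, half - seg_len)        # segment in first half
    j = rng.integers(half, len(x) - seg_len)   # segment in second half
    seg_i = x[i:i + seg_len].copy()
    x[i:i + seg_len] = x[j:j + seg_len]
    x[j:j + seg_len] = seg_i
    return x

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 200))     # a toy normal series
x_point = inject_point_anomaly(x, rng)
x_context = inject_contextual_anomaly(x, rng)
```

In a contrastive setup along these lines, such transformed series would serve as negatives (pushed apart in the latent space), while standard weak augmentations of the same window would serve as positives.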