Jianhuan Mao, Mengxiao Zhu, Lei Li, Haogang Zhu
Multivariate time series anomaly detection plays a vital role in safety-critical domains such as industrial systems, finance, and cybersecurity. However, labeled anomalies are scarce, which makes it difficult to learn robust normal patterns and blurs the boundary between normal and abnormal behavior. To address this, we propose ADLM, an unsupervised adversarial framework that integrates a Language-Model-based Predictor for Time Series (LMPTS) with an autoencoder. To capture normal patterns from limited data, LMPTS repurposes a decoder-only pretrained language model as an autoregressive forecaster, leveraging its strong generative prior to model temporal dependencies. To capture complex cross-sensor dependencies, we incorporate graph structure learning into the framework. We further introduce an adversarial training strategy that sharpens the model's normal-pattern representations and amplifies deviations indicative of anomalies. Experiments on six public datasets show that ADLM consistently outperforms state-of-the-art baselines and remains robust under severe data scarcity. By coupling decoder-only language models with an adversarial objective, ADLM offers a label-efficient, structure-aware solution to multivariate time series anomaly detection.
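The abstract describes scoring anomalies by how far observations deviate from a forecaster's predictions of normal behavior. The sketch below is not the ADLM method itself, only a minimal illustration of forecast-residual anomaly scoring on a multivariate series; the function name `anomaly_scores` and the zero-forecast toy setup are assumptions for demonstration.

```python
import numpy as np

def anomaly_scores(x, x_hat, eps=1e-8):
    """Per-timestep anomaly score from forecast residuals.

    x, x_hat: arrays of shape (T, D) -- observed and predicted
    multivariate series. Residuals are normalized per channel,
    then the worst channel at each timestep is taken as the score.
    """
    resid = np.abs(x - x_hat)                       # (T, D) absolute errors
    mu = resid.mean(axis=0, keepdims=True)          # per-channel mean error
    sigma = resid.std(axis=0, keepdims=True) + eps  # per-channel spread
    z = (resid - mu) / sigma                        # normalized deviations
    return z.max(axis=1)                            # worst channel per step

# Toy example: small-noise observations with one injected spike.
T, D = 100, 3
rng = np.random.default_rng(0)
x_hat = np.zeros((T, D))                 # stand-in "forecast": all zeros
x = rng.normal(0.0, 0.1, size=(T, D))    # observations: small noise
x[60, 1] += 5.0                          # injected anomaly at t = 60
scores = anomaly_scores(x, x_hat)
print(int(scores.argmax()))              # the spike dominates the scores
```

A real forecaster (such as the paper's language-model predictor) would replace the zero forecast; the scoring logic stays the same.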