BOOK-CHAPTER

Adversarial Pretrained Language Model for Multivariate Time Series Anomaly Detection

Abstract

Multivariate time series anomaly detection plays a vital role in safety-critical domains such as industrial systems, finance, and cybersecurity. However, the scarcity of labeled anomalies poses significant challenges for learning robust normal patterns, often blurring the boundary between normal and abnormal behaviors. To address this challenge, we propose ADLM, an unsupervised adversarial framework that integrates a Language-Model-based Predictor for Time Series (LMPTS) with an autoencoder. To capture normal patterns under limited data, LMPTS repurposes a decoder-only pretrained language model as an autoregressive forecaster, leveraging its strong generative prior to capture temporal dependencies. To model complex cross-sensor dependencies, we incorporate graph structure learning into the framework. Furthermore, we introduce an adversarial training strategy to sharpen the model’s normal-pattern representations and amplify deviations indicative of anomalies. Experiments on six public datasets show that ADLM consistently outperforms state-of-the-art baselines and remains robust under severe data scarcity. By coupling decoder-only language models with an adversarial objective, ADLM offers a label-efficient, structure-aware solution to multivariate time series anomaly detection.
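The scoring idea the abstract describes (a forecaster's prediction error combined with an autoencoder's reconstruction error, so that deviations from learned normal patterns are amplified) can be sketched in miniature. This is a hypothetical illustration only: the real ADLM uses a pretrained decoder-only language model as the forecaster and a learned autoencoder, whereas here both are stubbed with trivial stand-ins (a persistence forecast and a fixed low-rank projection), and the function names and `alpha` weight are assumptions, not part of the paper.

```python
import numpy as np

def forecast_error(window, next_obs):
    # Stub forecaster: persistence (last observation carries forward).
    # ADLM instead uses an autoregressive pretrained language model.
    pred = window[-1]
    return np.abs(next_obs - pred)

def reconstruction_error(x, proj):
    # Stub "autoencoder": project onto a low-rank subspace and back.
    # A real autoencoder learns this mapping from normal data.
    z = proj @ x
    x_hat = proj.T @ z
    return np.abs(x - x_hat)

def anomaly_score(window, next_obs, proj, alpha=0.5):
    # Combine per-sensor errors; observations that both deviate from the
    # forecast and reconstruct poorly receive large scores.
    fe = forecast_error(window, next_obs)
    re = reconstruction_error(next_obs, proj)
    return float(alpha * fe.mean() + (1 - alpha) * re.mean())

# Toy usage: a steady 4-sensor signal vs. a spiked observation.
window = np.ones((10, 4))          # normal history: constant readings
proj = np.eye(4)[:2]               # rank-2 projection (keeps sensors 0, 1)
normal = np.ones(4)
spike = np.array([1.0, 1.0, 5.0, 5.0])
print(anomaly_score(window, normal, proj))  # low score
print(anomaly_score(window, spike, proj))   # higher score
```

The spiked observation scores higher on both terms, which mirrors the paper's intuition that sharpening normal-pattern representations widens the score gap between normal and anomalous points; an adversarial objective and cross-sensor graph structure, omitted here, are what the full framework adds on top.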


© 2026 ScienceGate Book Chapters — All rights reserved.