Abstract

Unsupervised deep learning techniques are widely used to identify anomalous behaviour. The performance of such methods is a product of the amount of training data and the model size. However, model size is often a limiting factor for deployment on resource-constrained devices. We present a novel procedure based on knowledge distillation for compressing an unsupervised anomaly detection model into a supervised, deployable one, and we suggest a set of techniques to improve detection sensitivity. Compressed models perform comparably to their larger counterparts while significantly reducing size and memory footprint.
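The core idea described above, training a small supervised "student" to reproduce the anomaly scores of a large unsupervised "teacher", can be sketched in a few lines. This is an illustrative assumption, not the authors' actual procedure: the teacher here is a trivial squared-distance detector standing in for a large unsupervised model, and the student is a linear regressor on a hand-picked feature map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal (non-anomalous) training data.
X = rng.normal(size=(500, 8))
mu = X.mean(axis=0)

def teacher_score(A):
    # Stand-in "teacher": squared distance to the training mean,
    # playing the role of a large unsupervised detector's anomaly score.
    return np.sum((A - mu) ** 2, axis=1)

# Distillation step: fit a small supervised "student" to regress the
# teacher's scores. Features [x, x^2, 1] make this teacher exactly learnable.
Z = np.hstack([X, X ** 2, np.ones((len(X), 1))])
y = teacher_score(X)
w, *_ = np.linalg.lstsq(Z, y, rcond=None)

# The compact student now mimics the teacher on unseen data.
X_test = rng.normal(size=(100, 8))
Z_test = np.hstack([X_test, X_test ** 2, np.ones((len(X_test), 1))])
err = np.mean(np.abs(Z_test @ w - teacher_score(X_test)))
print(err)  # near zero: the student matches the teacher's scores
```

In practice the teacher would be a trained deep model and the student a small network, but the supervision signal is the same: the teacher's scores become regression targets, so no anomaly labels are needed.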

Keywords:
Computer science, Anomaly detection, Distillation, Memory footprint, Software deployment, Unsupervised learning, Artificial intelligence, Machine learning, Data mining, Pattern recognition, Engineering

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.26
Refs: 17
Citation Normalized Percentile: 0.57

Topics

Anomaly Detection Techniques and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Network Security and Intrusion Detection (Physical Sciences → Computer Science → Computer Networks and Communications)
Time Series Analysis and Forecasting (Physical Sciences → Computer Science → Signal Processing)

Related Documents

JOURNAL ARTICLE

Heterogeneous Knowledge Distillation for Anomaly Detection

Longjiang Wu, Jiali Zhou

Journal: IEEE Access Year: 2024 Vol: 12 Pages: 161490-161499
BOOK-CHAPTER

Relation-Based Knowledge Distillation for Anomaly Detection

Hekai Cheng, Lu Yang, Zulong Liu

Lecture Notes in Computer Science Year: 2021 Pages: 105-116
JOURNAL ARTICLE

Multitask Hybrid Knowledge Distillation for Unsupervised Anomaly Detection

Muhao Xu, Cuiping Zhu, Guang Feng, Sijie Niu

Journal: IEEE Transactions on Industrial Informatics Year: 2025 Vol: 21 (7) Pages: 5666-5676