Wen Shao, Rei Kawakami, Takeshi Naemura
Previous studies on video anomaly detection train models to perform reconstruction or prediction tasks on normal data, so that frames on which task performance is low are flagged as anomalies at test time. This paper proposes a new approach that sorts video clips using a generative network structure. Our approach learns spatial context from appearance and temporal context from the order relationship of frames. Experiments were conducted on four datasets, with the anomalous sequences categorized by appearance and motion; evaluations were performed not only on each full dataset but also on each category. Our method improved detection performance on anomalies that differ from normality in either appearance or motion. Moreover, combining our approach with a prediction method improved precision at high recall.
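To make the clip-sorting idea concrete, the following is a minimal sketch of the pretext-task construction implied by the abstract: frames of a short clip are shuffled by a random permutation, and a model would be trained to recover the original temporal order. All names, the clip length, and the permutation-classification framing are illustrative assumptions, not the authors' exact formulation.

```python
import itertools
import random

import numpy as np

# Hypothetical sketch (not the paper's exact method): sorting shuffled
# clips as a self-supervised task for learning temporal context.

CLIP_LEN = 4
# Enumerate all orderings of a clip; each permutation is one class label.
PERMUTATIONS = list(itertools.permutations(range(CLIP_LEN)))

def make_sorting_sample(frames, rng=random):
    """Shuffle a clip of CLIP_LEN frames; return (shuffled_frames, label).

    The label indexes the permutation that was applied, so a network
    predicting the label effectively predicts the original frame order.
    """
    assert len(frames) == CLIP_LEN
    label = rng.randrange(len(PERMUTATIONS))
    perm = PERMUTATIONS[label]
    # shuffled[pos] holds the frame originally at index perm[pos].
    shuffled = [frames[i] for i in perm]
    return shuffled, label

# Toy usage: stand-in frame arrays; a real model would consume pixels.
frames = [np.full((8, 8), t, dtype=np.float32) for t in range(CLIP_LEN)]
shuffled, label = make_sorting_sample(frames)

# Recover the original order from the label to verify the construction.
perm = PERMUTATIONS[label]
restored = [None] * CLIP_LEN
for pos, src in enumerate(perm):
    restored[src] = shuffled[pos]
assert all((restored[t] == frames[t]).all() for t in range(CLIP_LEN))
```

At test time, low accuracy on this ordering task for a given clip would signal motion that deviates from normal temporal patterns, complementing appearance-based cues.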