The process of automatically detecting abnormal video patterns in an intelligent surveillance framework is known as video anomaly detection. However, video anomaly detection is challenging due to inherent research challenges such as the equivocal nature of anomalies, data imbalance, data scarcity, and the complexity of the entities involved in an anomaly. Hence, a self-attention-enabled convolutional spatiotemporal autoencoder is proposed to detect video anomalies efficiently. The proposed Self-Attention-enabled Convolutional Long Short-Term Memory Auto-Encoder (SA-ConvLSTM2D-AE)-based video anomaly detector comprises three sequential stages: a spatial encoder that learns spatial (appearance) features of individual frames, a temporal encoder-decoder that learns temporal (motion) features of the encoded spatial features, and a spatial decoder that decodes the encoded features to reconstruct the individual frames. Here, the self-attention mechanism is embedded into the convolutional Long Short-Term Memory block of the temporal encoder-decoder section to form the Self-Attention-enabled ConvLSTM block, which learns better spatiotemporal features. An efficient threshold selection criterion is implemented, based on finding the optimal geometric mean (G-mean) of sensitivity and specificity from the Receiver Operating Characteristic (ROC) curve. The model is trained only on video frame sequences corresponding to normal incidents. Consequently, the model reconstructs test frame sequences containing video anomalies poorly, as anomalous samples are never seen during training. Hence, an anomaly is detected when the anomaly score of an individual frame exceeds the selected optimal threshold.
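The scoring and thresholding steps described above can be sketched as follows. This is a minimal illustration assuming NumPy; the function names (`frame_anomaly_scores`, `gmean_optimal_threshold`) are hypothetical, and the SA-ConvLSTM2D-AE model itself is not reproduced here — `recon` stands for the autoencoder's reconstructions of the input frames.

```python
import numpy as np

def frame_anomaly_scores(frames, recon):
    """Per-frame anomaly score as min-max normalised reconstruction error.

    frames, recon: arrays of shape (n, H, W, C); `recon` is the
    autoencoder's reconstruction of `frames` (model not shown here).
    """
    # L2 reconstruction error of each frame.
    err = np.sqrt(((frames - recon) ** 2).sum(axis=(1, 2, 3)))
    # Normalise to [0, 1] so a single threshold can be applied.
    return (err - err.min()) / (err.max() - err.min() + 1e-12)

def gmean_optimal_threshold(scores, labels):
    """Pick the threshold maximising the geometric mean of
    sensitivity (TPR) and specificity (TNR) over the ROC operating points.

    scores: anomaly scores (higher = more anomalous), shape (n,)
    labels: 1 for anomalous frames, 0 for normal frames, shape (n,)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = labels == 1
    neg = ~pos
    best_thr, best_g = None, -1.0
    # Each distinct score value is a candidate threshold (one ROC point).
    for thr in np.unique(scores):
        pred = scores >= thr                             # flagged anomalous
        tpr = pred[pos].mean() if pos.any() else 0.0     # sensitivity
        tnr = (~pred[neg]).mean() if neg.any() else 0.0  # specificity
        g = np.sqrt(tpr * tnr)                           # geometric mean
        if g > best_g:
            best_g, best_thr = g, thr
    return best_thr, best_g
```

In practice the threshold would be selected once on a labelled validation set and then applied to the normalized scores of unseen test frames; frames whose score exceeds the threshold are declared anomalous.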
Rashmiranjan Nayak, Umesh Chandra Pati, Santos Kumar Das