Chenkai Zhang, Yueming Wang, Wenming Tan
Detecting and localizing anomalies with models trained only on normal samples remains challenging. Methods that use image reconstruction as a pretext task can localize anomalies precisely but struggle to restrain their reconstruction capability on unseen anomalies. This paper proposes Multi-Task and Hard example Mining (MTHM), a new framework for anomaly detection and localization. The self-supervised multi-task setting exploits the competition among different tasks to learn more compact and efficient representations for detection. Moreover, introducing additional semantic tasks allows the shared encoder to learn beyond the pixel-to-pixel mapping of a single image-reconstruction task. Analysis experiments demonstrate that the proposed method better suppresses the reconstruction of anomalies. At test time, the outputs of the other tasks also provide valuable cues for anomaly detection and localization. Furthermore, combined with a novel hard-example-mining strategy, the byproducts of the image reconstruction task are inexpensively reused as hard-to-detect samples that strengthen the model's detection ability; as the model improves during training, the difficulty of these samples increases adaptively. Our experiments show that hard samples generated in the later training stages better approximate the real data distribution. With the help of the multi-task framework and the hard-example-mining strategy, our method surpasses many state-of-the-art methods.
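The abstract does not include code; as a rough, hypothetical illustration of the two ideas it describes (scoring anomalies by reconstruction error, and reusing reconstruction byproducts as hard training samples), a minimal sketch might look as follows. All function names and the selection heuristic are assumptions, not the paper's actual method.

```python
import numpy as np

def anomaly_map(image, reconstruction):
    """Per-pixel squared reconstruction error, usable as an anomaly heatmap."""
    return (image - reconstruction) ** 2

def mine_hard_examples(images, reconstructions, top_k=1):
    """Reuse the worst-reconstructed byproducts as hard training samples.

    Hypothetical heuristic: rank reconstructions by mean residual and keep
    the top_k with the largest error. As training improves the model, its
    reconstructions drift toward the normal data distribution, so the mined
    samples become progressively harder to tell apart from real data.
    """
    errors = np.array([
        anomaly_map(x, r).mean() for x, r in zip(images, reconstructions)
    ])
    hard_idx = np.argsort(errors)[-top_k:]  # indices of largest mean error
    return [reconstructions[i] for i in hard_idx]
```

In this sketch the mined samples come for free from the forward pass, which matches the abstract's claim that the byproducts are exploited inexpensively.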