Manipulated videos often contain subtle inconsistencies between their visual and audio signals. We propose a video forensics method, based on anomaly detection, that can identify these inconsistencies and that can be trained solely on real, unlabeled data. We train an autoregressive model to generate sequences of audio-visual features, using feature sets that capture the temporal synchronization between video frames and sound. At test time, we then flag videos to which the model assigns low probability. Despite being trained entirely on real videos, our model obtains strong performance on the task of detecting manipulated speech videos. Project site: https://cfeng16.github.io/audio-visual-forensics.
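As an illustration only (the abstract does not specify the model architecture, feature extractor, or scoring rule), the sketch below shows one way such an autoregressive anomaly detector could be set up. It assumes precomputed per-frame audio-visual feature vectors, a small GRU with a Gaussian output head as the autoregressive model, and the average negative log-likelihood of a sequence as the anomaly score; all of these choices are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of autoregressive anomaly detection over
# audio-visual feature sequences; not the authors' implementation.
import torch
import torch.nn as nn

class ARFeatureModel(nn.Module):
    """Autoregressive model over a sequence of audio-visual feature vectors."""
    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.mean = nn.Linear(hidden, feat_dim)      # mean of the next feature vector
        self.log_std = nn.Linear(hidden, feat_dim)   # log-std of the next feature vector

    def nll(self, feats: torch.Tensor) -> torch.Tensor:
        """Average negative log-likelihood of feats[:, 1:] given feats[:, :-1]."""
        h, _ = self.rnn(feats[:, :-1])               # (B, T-1, hidden)
        mu, log_std = self.mean(h), self.log_std(h)
        dist = torch.distributions.Normal(mu, log_std.exp())
        return -dist.log_prob(feats[:, 1:]).mean(dim=(1, 2))  # (B,)

# Training uses only real videos: minimize NLL of their feature sequences.
model = ARFeatureModel(feat_dim=64)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
real_feats = torch.randn(8, 50, 64)                 # placeholder for real-video features
opt.zero_grad()
loss = model.nll(real_feats).mean()
loss.backward()
opt.step()

# Test time: flag videos whose feature sequences receive low probability (high NLL).
with torch.no_grad():
    scores = model.nll(torch.randn(2, 50, 64))      # placeholder test features
    flagged = scores > scores.mean() + 2 * scores.std()  # hypothetical threshold rule
```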