There has been a longstanding belief that generation can facilitate a true understanding of visual data. In line with this, we revisit generatively pre-training visual representations in light of recent interest in denoising diffusion models. While directly pre-training with diffusion models does not produce strong representations, we condition them on masked input, formulating diffusion models as masked autoencoders (DiffMAE). Our approach is capable of (i) serving as a strong initialization for downstream recognition tasks, (ii) performing high-quality image inpainting, and (iii) extending effortlessly to video, where it achieves state-of-the-art classification accuracy. We further conduct a comprehensive study of the pros and cons of design choices and build connections between diffusion models and masked autoencoders.
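The core mechanism, in which only the masked patches are noised and a decoder denoises them conditioned on an MAE-style encoding of the visible patches, can be sketched compactly. The snippet below is a minimal PyTorch illustration rather than the paper's implementation: the module sizes, the linear noise schedule, the joint self-attention decoder, and the pixel-space MSE target are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class DiffMAESketch(nn.Module):
    """Minimal sketch: a diffusion decoder conditioned on masked input, MAE-style."""

    def __init__(self, dim=768, enc_depth=4, dec_depth=2, heads=8, T=1000):
        super().__init__()
        # Encoder sees visible patches only, as in MAE.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), enc_depth)
        # Decoder jointly attends to encoder output and noised masked patches.
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), dec_depth)
        self.t_embed = nn.Embedding(T, dim)  # diffusion timestep embedding
        # Linear noise schedule (an assumption; the paper's schedule may differ).
        betas = torch.linspace(1e-4, 0.02, T)
        self.register_buffer("abar", torch.cumprod(1.0 - betas, dim=0))

    def forward(self, patches, mask, t):
        """patches: (B, N, D) patch embeddings; mask: (B, N) bool, True = masked;
        t: (B,) diffusion timesteps. Assumes a fixed masking ratio so that
        per-image visible/masked counts match across the batch."""
        B, N, D = patches.shape
        vis = patches[~mask].reshape(B, -1, D)      # visible patches -> encoder
        latent = self.encoder(vis)
        # Diffusion forward process applied to the masked patches only.
        x0 = patches[mask].reshape(B, -1, D)
        noise = torch.randn_like(x0)
        a = self.abar[t].view(B, 1, 1)
        xt = a.sqrt() * x0 + (1 - a).sqrt() * noise
        # Decoder denoises the noised masked patches, conditioned on the latent.
        dec_in = torch.cat([latent, xt + self.t_embed(t).unsqueeze(1)], dim=1)
        pred = self.decoder(dec_in)[:, latent.shape[1]:]  # masked slots only
        return ((pred - x0) ** 2).mean()  # regress toward the clean patches
```

Keeping the encoder on visible patches only preserves MAE's pre-training efficiency; the diffusion-style noising of the masked targets is what distinguishes this formulation from a plain masked autoencoder.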