Diffusion models represent the current state of the art in deep generative modeling, with remarkable performance across a broad spectrum of applications. Despite this widespread success, the original formulations of these models exhibit notable limitations. Taking DDPM as a running example, this article derives the mathematical principles of the model in depth from two different perspectives. It then examines the relationship between diffusion models and five other families of generative models: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), autoregressive models, normalizing flows, and energy-based models. Concluding with open questions for future research, the article offers insights into prospective algorithmic and application-oriented developments of diffusion models. Diffusion models have become a powerful framework that competes with GANs in most applications without resorting to adversarial training. For a given task, understanding why and when diffusion models are more effective than other networks, and how they differ from other generative models, helps clarify why they can produce high-quality samples with high likelihood.
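As a concrete anchor for the DDPM derivation the abstract announces, the following is a minimal sketch of the DDPM forward (noising) process in closed form, q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 − ᾱ_t) I). All names here (`linear_beta_schedule`, `forward_sample`, the schedule endpoints) are illustrative choices matching the common DDPM setup, not notation taken from this article.

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Variance schedule beta_1..beta_T commonly used in DDPM (assumed values)."""
    return np.linspace(beta_start, beta_end, T)

def forward_sample(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I) directly,
    without iterating through the intermediate steps."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise

betas = linear_beta_schedule()
alpha_bar = np.cumprod(1.0 - betas)   # abar_t = prod_{s<=t} (1 - beta_s)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8,))        # a toy "data" vector
xT, _ = forward_sample(x0, len(betas) - 1, alpha_bar, rng)
# As t -> T, abar_t -> 0, so x_T is close to pure Gaussian noise;
# the reverse (denoising) model is trained to invert this corruption.
```

The closed-form jump to any timestep t is what makes DDPM training efficient: the network can be trained on randomly sampled timesteps rather than simulating the full Markov chain.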
Jiahang Cao, Ziqing Wang, Hanzhong Guo, Hao Cheng, Qiang Zhang, Renjing Xu
Huan Teng, Yuhui Quan, Chengyu Wang, Jun Huang, Hui Ji
Shuohang Yang, Jian Gao, Jiayi Zhang, Chao Xu
Esteban Hernandez Capel, Jonathan Dumas