Deep generative models are deep learning-based methods trained to synthesize samples from a given distribution. In recent years, they have attracted substantial interest from the research community, and the resulting tools now enjoy many practical applications in content creation and editing. In computer vision, such models are typically built for images, videos, and 3D objects. Recently, the paradigm of neural fields has emerged, which unifies the representation of these data types by parametrizing them with neural networks. In this work, we develop generative modeling methods for images, videos, and 3D objects that treat the underlying data in this form. We show that this perspective can yield state-of-the-art synthesis quality along with practical benefits such as interpolation/extrapolation capabilities, geometric inductive biases, and more efficient training and inference.
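To make the neural-field representation concrete: a minimal sketch of a coordinate-based MLP that represents a single image, assuming PyTorch. The class name, layer widths, and the regression loop below are illustrative choices, not the specific architectures developed in this work.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """A coordinate-based MLP: maps 2D (x, y) coordinates to RGB values,
    so the network's weights *are* the image representation."""
    def __init__(self, hidden_dim=256, num_layers=4):
        super().__init__()
        layers, in_dim = [], 2  # 2D input coordinates for an image
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers.append(nn.Linear(in_dim, 3))  # RGB output
        self.mlp = nn.Sequential(*layers)

    def forward(self, coords):
        # coords: (N, 2) tensor of coordinates in [-1, 1]
        return self.mlp(coords)

# Fit the field to one image by regressing pixel colors at sampled coordinates.
field = NeuralField()
optimizer = torch.optim.Adam(field.parameters(), lr=1e-4)
coords = torch.rand(1024, 2) * 2 - 1  # random coordinates in [-1, 1]
target = torch.rand(1024, 3)          # placeholder RGB targets for the sketch
optimizer.zero_grad()
loss = nn.functional.mse_loss(field(coords), target)
loss.backward()
optimizer.step()
```

The same recipe extends to videos (3D inputs: x, y, t) and 3D objects (e.g., coordinates mapped to occupancy or signed distance), which is what makes the representation a unifying one for generative modeling.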