We present DiffusionBERT, a new generative masked language model based on discrete diffusion models. Diffusion models and many pre-trained language models share a common training objective, i.e., denoising, making it possible to combine the two powerful models and enjoy the best of both worlds. On the one hand, diffusion models offer a promising training strategy that helps improve generation quality. On the other hand, pre-trained denoising language models (e.g., BERT) can serve as a good initialization that accelerates convergence. We explore training BERT to learn the reverse process of a discrete diffusion process with an absorbing state and elucidate several designs to improve it. First, we propose a new noise schedule for the forward diffusion process that controls the degree of noise added at each step based on the information of each token. Second, we investigate several designs for incorporating the time step into BERT. Experiments on unconditional text generation demonstrate that DiffusionBERT achieves significant improvement over existing diffusion models for text (e.g., D3PM and Diffusion-LM) and previous generative masked language models in terms of perplexity and BLEU score. Promising results on conditional generation tasks show that DiffusionBERT can generate texts that are comparable in quality to, and more diverse than, a series of established baselines.
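As a rough illustration of the absorbing-state forward process described above, the sketch below masks tokens with a probability that grows with the time step and is modulated by a per-token information weight, so that low-information tokens are absorbed earlier. The function name, the linear base schedule, and the use of a generic information score are illustrative assumptions, not the paper's exact noise schedule.

```python
import torch

def forward_mask(token_ids, t, T, token_info, mask_id):
    """Illustrative absorbing-state forward step (not the authors' exact schedule).

    token_ids:  (batch, seq_len) long tensor of token ids
    t, T:       current time step and total number of diffusion steps
    token_info: (vocab_size,) tensor of per-token information scores
                (e.g., a proxy such as negative log unigram frequency)
    mask_id:    id of the [MASK] / absorbing token
    """
    # Look up and normalize the information score of each token to [0, 1].
    info = token_info[token_ids]
    info = (info - info.min()) / (info.max() - info.min() + 1e-8)
    # Base rate grows linearly with t/T; low-information tokens get a
    # higher masking probability and are absorbed earlier.
    mask_prob = torch.clamp((t / T) * (1.5 - info), 0.0, 1.0)
    noised = token_ids.clone()
    noised[torch.rand_like(mask_prob) < mask_prob] = mask_id
    return noised

# Example usage with hypothetical BERT-style ids and random information scores.
vocab_size, mask_id = 30522, 103
token_info = torch.rand(vocab_size)          # stand-in for real token statistics
x0 = torch.randint(0, vocab_size, (2, 16))   # a toy batch of token ids
xt = forward_mask(x0, t=500, T=1000, token_info=token_info, mask_id=mask_id)
```

A reverse model (here, BERT) would then be trained to predict the original tokens at the masked positions given the noised sequence and the time step.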