Generative modeling is a powerful technique in which machine learning models are trained to produce new data resembling the data they were trained on. Generative Adversarial Networks (GANs) are a leading approach to generative modeling, but GAN training is notoriously difficult. GAN convergence issues are largely caused by the supports of the real and generated distributions being disjoint. To tackle this open problem, we propose a novel GAN pre-training process that aligns the supports of the generated and real data before applying traditional adversarial GAN training. The key component of our method, called AlignGAN, is learning a mapping between the input data distribution and a latent representation defined over a hypersphere, regularized by a One Class Classifier. This encourages the generator to produce samples throughout the support of the real data while avoiding samples outside that support. We maintain support alignment through low-bandwidth noise convolutions and additional One Class regularization, leading to continued stable GAN training. We validate our approach against leading stabilization methods on three benchmark datasets and show that AlignGAN routinely produces the best results.
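Two ingredients mentioned above are a hypersphere-constrained latent space and low-bandwidth noise convolutions. A minimal NumPy sketch of these two generic operations is shown below; the function names and the noise scale `sigma` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project_to_hypersphere(z):
    # Constrain latent vectors to the unit hypersphere by normalizing
    # each vector to unit Euclidean norm (illustrative assumption).
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def noise_convolve(x, sigma=0.05, rng=None):
    # Adding low-bandwidth Gaussian noise to samples is equivalent to
    # convolving their distribution with a Gaussian kernel, which widens
    # the supports and helps the real and generated supports overlap.
    rng = np.random.default_rng() if rng is None else rng
    return x + sigma * rng.normal(size=x.shape)

# Example: latent codes land exactly on the unit sphere.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
s = project_to_hypersphere(z)
print(np.allclose(np.linalg.norm(s, axis=-1), 1.0))  # True
```

The unit-norm constraint is one common way to define a latent distribution over a hypersphere; the Gaussian noise level controls the "bandwidth" of the smoothing convolution.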