Abstract

Generative modeling is a powerful technique for building machine learning models capable of producing new data similar to the data they were trained on. Generative Adversarial Networks (GANs) are a leading approach to generative modeling. However, GAN training is notoriously difficult, and GAN convergence issues are largely caused by the supports of the real and generated distributions being disjoint. To tackle this open problem, we propose a novel GAN pre-training process that aligns the supports of the generated and real data before traditional adversarial GAN training is applied. The key component of our method, called AlignGAN, is learning a mapping between the input data distribution and a latent representation defined over a hypersphere, regularized by a One-Class Classifier. This encourages the generator to produce samples throughout the support of the real data while not generating samples outside that support. We maintain support alignment through low-bandwidth noise convolutions and additional One-Class regularization, leading to continued stable GAN training. We validate our approach against leading stabilization methods on three benchmark datasets, showing that AlignGAN routinely produces the best results.
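The abstract does not spell out the pre-training objective, so the following is a minimal, hypothetical PyTorch sketch of the general idea it describes: an encoder projects (noise-smoothed) real and generated samples onto a unit hypersphere, a Deep-SVDD-style one-class penalty keeps both sets of latent codes inside a shared region, and a reconstruction term pulls generated samples onto the real support. The module names, architectures, the Gaussian-noise smoothing standing in for the low-bandwidth noise convolution, and the loss weights are all illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of support-alignment pre-training (not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps flattened inputs to a latent code on the unit hypersphere."""
    def __init__(self, in_dim=784, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, x):
        z = self.net(x)
        return F.normalize(z, dim=1)  # project onto the unit hypersphere

class Generator(nn.Module):
    """Maps hyperspherical latent codes back to data space."""
    def __init__(self, latent_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, z):
        return self.net(z)

def one_class_loss(z, center):
    """Deep-SVDD-style one-class penalty: squared distance of codes to a fixed center."""
    return ((z - center) ** 2).sum(dim=1).mean()

def pretrain_step(encoder, generator, x_real, center, opt, sigma=0.05, lam=0.1):
    """One illustrative pre-training step that aligns generated and real supports."""
    opt.zero_grad()
    # Smooth both distributions with small Gaussian noise (a simple stand-in for
    # the low-bandwidth noise convolution mentioned in the abstract).
    x_noisy = x_real + sigma * torch.randn_like(x_real)
    z_real = encoder(x_noisy)
    x_gen = generator(z_real)
    z_gen = encoder(x_gen + sigma * torch.randn_like(x_gen))
    # Reconstruction pulls generated samples onto the real support; the one-class
    # terms discourage codes (and hence samples) that fall outside that support.
    loss = F.mse_loss(x_gen, x_real) + lam * (one_class_loss(z_real, center) + one_class_loss(z_gen, center))
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random data (shapes and hyperparameters are arbitrary).
encoder, generator = Encoder(), Generator()
center = F.normalize(torch.randn(64), dim=0)  # fixed one-class center on the sphere
opt = torch.optim.Adam(list(encoder.parameters()) + list(generator.parameters()), lr=1e-4)
loss = pretrain_step(encoder, generator, torch.rand(32, 784), center, opt)
```

After such a pre-training phase, the generator and the one-class regularizer would be carried into standard adversarial training; the snippet above only covers the alignment step described in the abstract.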

Keywords:
Computer science; Machine learning; Generative modeling; Artificial intelligence; One-class classifier; Adversarial training; Disjoint supports; Regularization
