JOURNAL ARTICLE

Self-supervised learning with consistency loss for improving GANs

Jie Gao, Dandan Song

Year: 2022 | Journal: International Conference on Mechanisms and Robotics (ICMAR 2022) | Vol: 114 | Pages: 41-41

Abstract

After extensive research and development, GANs have achieved great success but still face many challenges. In this paper, we adopt self-supervised learning based on rotation-angle prediction to overcome catastrophic forgetting in the discriminator. Self-supervision encourages the discriminator to learn meaningful feature representations that are not forgotten during training. In addition, we adopt consistency-regularized adversarial training to alleviate mode collapse in the generator. The consistency constraint encourages the discriminator to explore more features, which gives the generator more room for improvement. The resulting deep generative model improves unsupervised image generation by simultaneously alleviating two critical issues in GANs. Experimental results demonstrate that our model achieves competitive scores.
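The abstract combines two known regularization ideas: a rotation-prediction pretext task for the discriminator (as in SS-GAN) and a consistency penalty on the discriminator's outputs under augmentation (as in CR-GAN). The sketch below illustrates both loss terms in plain NumPy; it is a minimal illustration, not the paper's implementation, and the discriminator outputs, rotation head, and loss weight are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_batch(images):
    """SS-GAN-style pretext task: rotate each image by 0/90/180/270
    degrees and label it with its rotation index (0..3)."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

def rotation_loss(logits, labels):
    """Cross-entropy over the 4 rotation classes, computed from a
    (hypothetical) auxiliary rotation head on the discriminator."""
    logits = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def consistency_loss(d_real, d_augmented):
    """CR-GAN-style term: penalize the discriminator for changing its
    output when the input is augmented in a semantics-preserving way."""
    return np.mean((d_real - d_augmented) ** 2)

# Toy batch: 2 grayscale 8x8 images.
images = rng.normal(size=(2, 8, 8))
rot_images, rot_labels = rotate_batch(images)          # 8 rotated images

# Stand-in discriminator outputs (random, for illustration only).
rot_logits = rng.normal(size=(8, 4))
l_ss = rotation_loss(rot_logits, rot_labels)
l_cr = consistency_loss(rng.normal(size=2), rng.normal(size=2))

# Combined discriminator regularizer; the weight 10.0 is illustrative.
l_total = l_ss + 10.0 * l_cr
```

In practice both terms are added to the standard adversarial loss of the discriminator; the generator's objective is unchanged by the consistency term.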

Keywords:
Discriminator, Computer science, Consistency (knowledge bases), Forgetting, Generator (circuit theory), Constraint (computer-aided design), Artificial intelligence, Generative grammar, Adversarial system, Face (sociological concept), Fuse (electrical), Deep learning, Feature (linguistics), Machine learning, Feature learning, Power (physics), Mathematics, Engineering, Psychology, Cognitive psychology

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 54
Citation Normalized Percentile: 0.12

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Generative Adversarial Networks and Image Synthesis
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition