JOURNAL ARTICLE

SR-GAN: Semantic Rectifying Generative Adversarial Network for Zero-shot Learning

Abstract

Existing zero-shot learning (ZSL) methods may suffer from vague class attributes that overlap heavily across classes. Unlike these methods, which ignore the discrimination among classes, in this paper we propose to classify unseen images by rectifying the semantic space under the guidance of the visual space. First, we pre-train a Semantic Rectifying Network (SRN) that rectifies the semantic space with a semantic loss and a rectifying loss. Then, a Semantic Rectifying Generative Adversarial Network (SR-GAN) is built to generate plausible visual features of unseen classes from both the semantic features and the rectified semantic features. To guarantee the effectiveness of the rectified semantic features and the synthetic visual features, a pre-reconstruction network and a post-reconstruction network are proposed, which keep the visual and semantic features consistent. Experimental results demonstrate that our approach significantly outperforms state-of-the-art methods on four benchmark datasets.
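The pipeline the abstract describes can be sketched as follows. This is a minimal, illustrative NumPy mock-up, not the authors' implementation: all dimensions, network shapes, and the two-layer MLP parameterization are assumptions chosen only to show the data flow (semantic feature → SRN rectification → generator conditioned on both features plus noise → synthetic visual feature → post-reconstruction back to semantics). Losses and training are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU; stands in for every sub-network in this sketch.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Illustrative dimensions (not from the paper): semantic, rectified-semantic,
# visual, noise, and hidden sizes.
d_sem, d_rect, d_vis, d_noise, d_hid = 85, 85, 2048, 85, 256

def init(d_in, d_out):
    # Small random weights for a two-layer MLP.
    return (rng.standard_normal((d_in, d_hid)) * 0.01, np.zeros(d_hid),
            rng.standard_normal((d_hid, d_out)) * 0.01, np.zeros(d_out))

srn = init(d_sem, d_rect)                      # Semantic Rectifying Network
gen = init(d_sem + d_rect + d_noise, d_vis)    # generator G
post_rec = init(d_vis, d_sem)                  # post-reconstruction network

def synthesize_visual(s, n_samples=4):
    # Rectify the class-level semantic vector, then condition the generator
    # on [original semantics, rectified semantics, noise] to draw samples.
    s_rect = mlp(s, *srn)
    z = rng.standard_normal((n_samples, d_noise))
    cond = np.concatenate([np.tile(s, (n_samples, 1)),
                           np.tile(s_rect, (n_samples, 1)), z], axis=1)
    return mlp(cond, *gen)

s = rng.standard_normal((1, d_sem))        # one unseen class's attribute vector
v_fake = synthesize_visual(s)              # synthetic visual features
s_back = mlp(v_fake, *post_rec)            # reconstruct semantics for consistency
```

In the full method, the synthetic visual features of unseen classes would be used to train an ordinary classifier, turning the zero-shot problem into a supervised one; the reconstruction networks constrain the generator so that `s_back` stays close to `s`.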

Keywords:
Zero-shot learning; Semantic space; Visual space; Generative adversarial network; Feature vector; Pattern recognition; Artificial intelligence

Metrics

Cited By: 33
FWCI (Field Weighted Citation Impact): 3.38
References: 37
Citation Normalized Percentile: 0.93


Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

