Abstract

As a structured representation of image content, the visual scene graph (visual relationships) acts as a bridge between computer vision and natural language processing. Existing models for scene graph generation notoriously require tens or hundreds of labeled samples per relationship. By contrast, human beings can learn visual relationships from a few examples, or even one. Inspired by this, we design a task named One-Shot Scene Graph Generation, where each relationship triplet (e.g., "dog-has-head") comes from only one labeled example. The key insight is that, rather than learning from scratch, one can exploit rich prior knowledge. In this paper, we propose Multiple Structured Knowledge (Relational Knowledge and Commonsense Knowledge) for the one-shot scene graph generation task. Specifically, Relational Knowledge captures prior knowledge of the relationships between entities extracted from visual content, e.g., the visual relationships "standing in", "sitting in", and "lying in" may hold between "dog" and "yard", while Commonsense Knowledge encodes "sense-making" knowledge such as "dog can guard yard". By organizing these two kinds of knowledge in a graph structure, Graph Convolutional Networks (GCNs) are used to extract knowledge-embedded semantic features of the entities. Moreover, instead of extracting isolated visual features for each entity detected by Faster R-CNN, we use an Instance Relation Transformer encoder to fully exploit their contextual information. On a constructed one-shot dataset, experimental results show that our method outperforms existing state-of-the-art methods by a large margin. Ablation studies further verify the effectiveness of the Instance Relation Transformer encoder and the Multiple Structured Knowledge.
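The two components the abstract describes (a GCN over a knowledge graph of entity classes, and a Transformer-style context encoder over detected instances) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all dimensions, the toy adjacency, and the single-head attention in place of a full Transformer encoder are assumptions.

```python
import numpy as np

def gcn_layer(h, adj, w):
    """One graph-convolution step: ReLU(A_hat @ H @ W)."""
    return np.maximum(adj @ h @ w, 0.0)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention: each instance attends to all others."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

rng = np.random.default_rng(0)
n, d = 5, 16  # 5 entities, 16-dim features (toy sizes)

# Toy knowledge graph over entity classes: self-loops plus one edge
# (e.g. "dog" -- "yard"), row-normalized as in standard GCNs.
adj = np.eye(n)
adj[0, 1] = adj[1, 0] = 1.0
adj /= adj.sum(axis=1, keepdims=True)

# Knowledge-embedded semantic features from (random stand-in) embeddings.
sem = gcn_layer(rng.standard_normal((n, d)), adj, rng.standard_normal((d, d)))

# Contextualized instance features instead of isolated per-box features.
ctx = self_attention(rng.standard_normal((n, d)),
                     *(rng.standard_normal((d, d)) for _ in range(3)))

# Fuse semantic and visual context per entity for relation prediction.
fused = np.concatenate([ctx, sem], axis=-1)
print(fused.shape)  # (5, 32)
```

In the paper's setting the GCN input would be entity word embeddings and the attention input would be Faster R-CNN region features; here both are random placeholders so the shapes can be checked end to end.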

Keywords:
Computer science, Commonsense knowledge, Artificial intelligence, Knowledge graph, Scene graph, Graph, Natural language processing, Knowledge base, Theoretical computer science, Rendering (computer graphics)

Metrics

Cited By: 26
FWCI (Field-Weighted Citation Impact): 1.89
Refs: 29
Citation Normalized Percentile: 0.88

Topics

Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

CONFERENCE PAPER

Zero-Shot Scene Graph Generation with Knowledge Graph Completion

Xiang Yu, Ruoxin Chen, Jie Li, Jiawei Sun, Shijing Yuan, Huxiao Ji, Xinyu Lu, Chentao Wu

Venue: 2022 IEEE International Conference on Multimedia and Expo (ICME), Year: 2022, Pages: 1-6
CONFERENCE PAPER

Zero-shot Scene Graph Generation with Relational Graph Neural Networks

Xiang Yu, Jie Li, Shijing Yuan, Chao Wang, Chentao Wu

Venue: 2022 26th International Conference on Pattern Recognition (ICPR), Year: 2022, Pages: 1894-1900
JOURNAL ARTICLE

Decomposed Prototype Learning for Few-Shot Scene Graph Generation

X. Y. Li, Jun Xiao, Guikun Chen, Yinfu Feng, Yi Yang, An-An Liu, Long Chen

Journal: ACM Transactions on Multimedia Computing, Communications, and Applications, Year: 2024, Vol: 21 (1), Pages: 1-24
JOURNAL ARTICLE

Zero-shot Scene Graph Generation via Triplet Calibration and Reduction

Jiankai Li, Yunhong Wang, Weixin Li

Journal: ACM Transactions on Multimedia Computing, Communications, and Applications, Year: 2023, Vol: 20 (1), Pages: 1-21