Abstract

Class prototype construction and matching are core aspects of few-shot action recognition. Previous methods mainly focus on designing spatiotemporal relation modeling modules or complex temporal alignment algorithms. Despite promising results, they overlook the value of class prototype construction and matching, leading to unsatisfactory performance when distinguishing similar categories within a task. In this paper, we propose GgHM, a new framework with Graph-guided Hybrid Matching. Concretely, we learn task-oriented features under the guidance of a graph neural network during class prototype construction, explicitly optimizing intra- and inter-class feature correlation. Next, we design a hybrid matching strategy that combines frame-level and tuple-level matching to classify videos with diverse styles. We additionally propose a learnable dense temporal modeling module that enhances the temporal representation of video features, building a more solid foundation for the matching process. GgHM shows consistent improvements over other challenging baselines on several few-shot datasets, demonstrating the effectiveness of our method. The code will be publicly available at https://github.com/jiazheng-xing/GgHM.
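The hybrid matching idea from the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring functions, the use of cosine similarity, the concatenation of frame pairs into tuples, and the weighting parameter `alpha` are all illustrative assumptions; the paper's actual matching operates on learned prototypes inside a trained network.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors (eps avoids division by zero).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def frame_level_score(query, prototype):
    # Match each query frame to its most similar prototype frame, then average.
    sims = [max(cosine_sim(q, p) for p in prototype) for q in query]
    return sum(sims) / len(sims)

def tuple_level_score(query, prototype):
    # Form ordered frame pairs (tuples) by concatenation to capture temporal
    # order, then match tuples the same way as frames.
    def tuples(frames):
        n = len(frames)
        return [np.concatenate([frames[i], frames[j]])
                for i in range(n) for j in range(i + 1, n)]
    proto_tuples = tuples(prototype)
    sims = [max(cosine_sim(q, p) for p in proto_tuples) for q in tuples(query)]
    return sum(sims) / len(sims)

def hybrid_score(query, prototype, alpha=0.5):
    # Weighted combination of frame-level and tuple-level matching scores;
    # the query is assigned to the class prototype with the highest score.
    return (alpha * frame_level_score(query, prototype)
            + (1 - alpha) * tuple_level_score(query, prototype))
```

A query video close to one class prototype should score higher against that prototype than against another, which is the property a few-shot classifier built on this matching would rely on.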

Keywords:
Computer science, Boosting (machine learning), Artificial intelligence, Matching (statistics), Graph, Feature extraction, Machine learning, Pattern recognition (psychology), Feature (linguistics), Data mining, Theoretical computer science

Metrics

Cited By: 35
FWCI (Field Weighted Citation Impact): 6.37
Refs: 55
Citation Normalized Percentile: 0.96 (in top 1%; in top 10%)

Topics

Human Pose and Action Recognition
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Diabetic Foot Ulcer Assessment and Management
Health Sciences →  Medicine →  Endocrinology, Diabetes and Metabolism

Related Documents

JOURNAL ARTICLE

Hybrid Relation Guided Set Matching for Few-shot Action Recognition

Xiang Wang, Shiwei Zhang, Zhiwu Qing, Mingqian Tang, Zhengrong Zuo, Changxin Gao, Rong Jin, Nong Sang

Journal: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Year: 2022, Pages: 19916-19925
BOOK-CHAPTER

Rethinking Matching-Based Few-Shot Action Recognition

Juliette Bertrand, Yannis Kalantidis, Giorgos Tolias

Lecture Notes in Computer Science, Year: 2023, Pages: 215-236
BOOK-CHAPTER

Convolutional Self-attention Guided Graph Neural Network for Few-Shot Action Recognition

Fei Pan, Jie Guo, Yanwen Guo

Lecture Notes in Computer Science, Year: 2023, Pages: 401-412
BOOK-CHAPTER

Compound Prototype Matching for Few-Shot Action Recognition

Yifei Huang, Lijin Yang, Yoichi Sato

Lecture Notes in Computer Science, Year: 2022, Pages: 351-368