In recent years, the recognition of human actions from the movements of human body joints has become a source of crucial data, with applications ranging from video surveillance to computer vision. Most approaches share a common foundation in deep learning methods, particularly convolutional networks. Graph convolutional networks (GCNs) are extensively used for skeleton-based action recognition. We point out that current GCN-based methods generally rely on predefined graph patterns (i.e., a hand-crafted structure over the joints of the skeleton), which limits their ability to capture intricate relationships between joints; a more advanced model can therefore be built on the GCN framework. This paper presents a model based on Spatial Temporal Graph Convolutional Networks (ST-GCN) [1], which learn from both the spatial and the temporal variability of skeleton input data. We use the large-scale Kinetics dataset to perform the analysis and predict actions for the given skeletal data.
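To make the idea concrete, the sketch below shows the core computation of one spatial-temporal graph convolution layer in NumPy: joints first aggregate features from their skeleton-graph neighbours via a normalized adjacency matrix (the spatial step), then a 1-D convolution mixes features across consecutive frames (the temporal step). This is a minimal illustration under simplifying assumptions, not the paper's actual implementation; all function and variable names here are illustrative.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetrically normalize A + I so feature aggregation does not
    # scale with joint degree: A_hat = D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def st_gcn_layer(X, A_norm, W_spatial, w_temporal):
    # X: skeleton sequence of shape (T frames, V joints, C channels).
    # Spatial step: each joint aggregates its graph neighbours'
    # features, then a learned linear map mixes the channels.
    H = np.einsum("vu,tuc->tvc", A_norm, X) @ W_spatial
    # Temporal step: a 1-D convolution over frames, per joint/channel,
    # with zero padding at the sequence boundaries.
    K = len(w_temporal)
    T = H.shape[0]
    out = np.zeros_like(H)
    for t in range(T):
        for k, wk in enumerate(w_temporal):
            src = t + k - K // 2
            if 0 <= src < T:
                out[t] += wk * H[src]
    return np.maximum(out, 0.0)  # ReLU nonlinearity
```

For example, with a toy 3-joint chain skeleton (hip-spine-head) and a 4-frame sequence, one layer maps a (4, 3, C) input to a (4, 3, C') output while respecting both the skeleton topology and the frame ordering; a full ST-GCN stacks many such layers before a classifier.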