JOURNAL ARTICLE

Action Recognition using Spatial and Temporal Features with Kernel SVM

Narayanasamy Nivetha

Year: 2023 | Journal: International Journal For Multidisciplinary Research | Vol: 5 (4)

Abstract

A new low-level visual feature, the spatio-temporal context distribution of interest points, is used to describe human actions. Each action video is expressed as a set of relative XYT coordinates between interest points, listed pairwise within a local region. From the input image frames, the Locally Weighted Word Context (LWWC) descriptor encodes the spatial context of interest points rather than being limited to a single interest point, and Graph Regularized Nonnegative Matrix Factorization (GNMF) encodes the geometric information by constructing a nearest-neighbour graph. By extracting kernel weights from the resulting feature variables, a kernel-weighted SVM is modelled to jointly capture the compatibility between multilevel action features and action classes, and between multilevel scene features and scene classes. The contextual relationship between action classes and scene classes is then derived using the kernel weight as a variable.
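The pipeline sketched in the abstract can be illustrated in miniature: build a descriptor from pairwise relative XYT offsets between a video's interest points, then feed those descriptors to a kernel SVM. The sketch below is an assumption-laden stand-in, not the paper's method: the `pairwise_context_histogram` helper and the toy data are hypothetical, and it substitutes a plain offset histogram plus scikit-learn's RBF-kernel `SVC` for the actual LWWC descriptor, GNMF encoding, and kernel-weighted SVM.

```python
import numpy as np
from sklearn.svm import SVC

def pairwise_context_histogram(points, bins=4, extent=2.0):
    """Illustrative feature (hypothetical helper): histogram of the
    relative XYT offsets between all pairs of interest points.
    This only mimics the idea of a pairwise spatio-temporal context
    distribution; it is not the paper's LWWC/GNMF encoding."""
    diffs = points[:, None, :] - points[None, :, :]   # (n, n, 3) relative XYT offsets
    diffs = diffs[~np.eye(len(points), dtype=bool)]   # drop self-pairs
    hist, _ = np.histogramdd(diffs, bins=bins,
                             range=[(-extent, extent)] * 3)
    return hist.ravel() / max(hist.sum(), 1)          # normalised descriptor

rng = np.random.default_rng(0)
# Toy "videos": class 0 has tightly clustered interest points,
# class 1 has widely spread ones, so their offset histograms differ.
X, y = [], []
for label, scale in [(0, 0.3), (1, 1.5)]:
    for _ in range(20):
        pts = rng.normal(0.0, scale, size=(30, 3))    # (x, y, t) interest points
        X.append(pairwise_context_histogram(pts))
        y.append(label)
X, y = np.array(X), np.array(y)

# Kernel SVM over the context descriptors (RBF kernel as a stand-in
# for the paper's kernel-weighted formulation).
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))
```

On this toy data the two classes are easily separable, so training accuracy is high; the point is only the shape of the pipeline (pairwise XYT context feature → kernel SVM), not the numbers.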

Keywords:
Pattern recognition (psychology), Artificial intelligence, ENCODE, Kernel (algebra), Mathematics, Computer science, Action recognition, Point of interest, Graph kernel, Feature vector, Graph, Multiple kernel learning, Kernel method, Support vector machine, Kernel embedding of distributions, Theoretical computer science, Combinatorics

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 7
Citation Normalized Percentile: 0.09

Topics

Human Pose and Action Recognition
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Video Surveillance and Tracking Methods
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Gait Recognition and Analysis
Physical Sciences →  Engineering →  Biomedical Engineering