JOURNAL ARTICLE

Jointly Learning Multi-view Features for Human Action Recognition

Abstract

Classical methods for multi-view human action recognition focus on constructing similar features across views so that videos of the same action captured from different viewpoints can be classified consistently. However, the features of a given action vary across views, especially when the viewpoint changes sharply, so relying on cross-view similarity alone does not yield satisfactory results. In this paper we propose a model that jointly learns features from different views. The model exploits both the information an action shares across all views and the information specific to each individual view; combining the shared and view-specific information yields a more discriminative feature. The features from different views are reconstructed through a linear projection matrix so that they share a common structure. To obtain an optimal solution with guaranteed convergence, the model is optimized by a three-step iterative procedure. The effectiveness of the method is verified on the WVU dataset.
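The abstract does not spell out the objective function or the update rules, so the following is only a minimal illustrative sketch in NumPy of the kind of model it describes: each view's features are reconstructed from a code that is the sum of a shared part S (common to all views) and a view-specific part D_v, through a per-view linear projection W_v, and the three quantities are updated in turn in a three-step alternating loop. The reconstruction form X_v ≈ (S + D_v) W_v, the ridge penalties, and all function and variable names are assumptions made for illustration, not the paper's actual formulation.

import numpy as np

def joint_multiview_features(views, k=10, lam=0.1, eps=1e-6, n_iter=50, seed=0):
    """Illustrative alternating optimization for shared + view-specific codes.

    views : list of arrays X_v of shape (n, d_v), one per camera view, with
            rows aligned so that the same row index is the same action clip.
    k     : dimensionality of the learned code.
    lam   : ridge weight on the view-specific codes D_v.
    Returns the shared code S, the specific codes D_v and projections W_v.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    S = 0.01 * rng.standard_normal((n, k))            # shared code, all views
    D = [np.zeros((n, k)) for _ in views]             # view-specific codes
    W = [0.01 * rng.standard_normal((k, X.shape[1])) for X in views]
    I_k = np.eye(k)

    for _ in range(n_iter):
        # Step 1: update each projection W_v by ridge-regularized least squares,
        # fitting X_v from the current code A_v = S + D_v.
        for v, X in enumerate(views):
            A = S + D[v]
            W[v] = np.linalg.solve(A.T @ A + eps * I_k, A.T @ X)

        # Step 2: update the shared code S using all views jointly
        # (solves S @ lhs = rhs, the normal equations summed over views).
        lhs = sum(Wv @ Wv.T for Wv in W) + eps * I_k
        rhs = sum((X - D[v] @ W[v]) @ W[v].T for v, X in enumerate(views))
        S = np.linalg.solve(lhs.T, rhs.T).T

        # Step 3: update each view-specific code D_v with a ridge penalty,
        # fitting only the residual that the shared code does not explain.
        for v, X in enumerate(views):
            lhs_v = W[v] @ W[v].T + lam * I_k
            rhs_v = (X - S @ W[v]) @ W[v].T
            D[v] = np.linalg.solve(lhs_v.T, rhs_v.T).T

    return S, D, W

# Toy usage: three views of the same 40 clips; the fused descriptor simply
# concatenates the shared and view-specific codes (e.g. as input to a classifier).
rng = np.random.default_rng(1)
views = [rng.standard_normal((40, d)) for d in (60, 80, 100)]
S, D, W = joint_multiview_features(views, k=10)
fused = np.hstack([S] + D)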

Keywords:
Action recognition; Pattern recognition; Machine learning; Artificial intelligence; Computer science

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.10
Refs: 12
Citation Normalized Percentile: 0.39

Topics

Human Pose and Action Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Gait Recognition and Analysis (Physical Sciences → Engineering → Biomedical Engineering)
Video Surveillance and Tracking Methods (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)