Microsoft Kinect's output is a multi-modal signal that provides RGB video, depth sequences, and skeleton information simultaneously. Many action recognition techniques have focused on a single modality of this signal, building their classifiers over features extracted from one channel. For better recognition performance, it is desirable to fuse this multi-modal information into an integrated set of discriminative features. Most current fusion methods merge heterogeneous features in a holistic manner and ignore the complementary properties of these modalities at finer levels. In this paper, we propose a new hierarchical bag-of-words feature fusion technique based on multi-view structured sparsity learning, which fuses atomic features from RGB and skeleton data for the task of action recognition.
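The abstract does not specify the paper's exact objective function; as a point of reference, multi-view structured sparsity learning is commonly formulated as a jointly regularized regression over per-view features, sketched below with assumed notation ($X^{(v)}$, $W^{(v)}$, $Y$, and $\lambda$ are illustrative, not taken from the paper):

\[
\min_{\{W^{(v)}\}_{v=1}^{V}} \;\sum_{v=1}^{V}\bigl\|Y - X^{(v)}W^{(v)}\bigr\|_F^2 \;+\; \lambda\sum_{v=1}^{V}\bigl\|W^{(v)}\bigr\|_{2,1},
\]

where $X^{(v)}$ holds the bag-of-words features of view $v$ (e.g., RGB or skeleton), $Y$ is the label indicator matrix, and $\|W\|_{2,1} = \sum_i \|w_i\|_2$ is a row-sparsity-inducing norm. In such formulations, the $\ell_{2,1}$ penalty selects discriminative codewords within each modality, while the shared target $Y$ couples the views so that the selected features are complementary across modalities.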