Abstract

Human-Robot Interaction deals with the direct use of robotic systems to interact with humans in a particular context. In this paper, a humanoid robot is developed that can understand commands given as speech and gestures. A connected-word, speaker-independent speech recognition system is built using Mel-Frequency Cepstral Coefficients (MFCC) and Gaussian Mixture Models (GMM) in the Kaldi toolkit. Gesture recognition is implemented using a Convolutional Neural Network (CNN).
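The paper's recognizer is built with Kaldi's GMM pipeline; purely as an illustration of the underlying idea (not the authors' code), the sketch below scores one MFCC feature frame against two diagonal-covariance GMM word models in NumPy. The model parameters and the word labels "yes"/"no" are invented for the example.

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM."""
    # Per-component Gaussian log-densities plus mixture log-weights.
    log_probs = (
        np.log(weights)
        - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
        - 0.5 * np.sum((x - means) ** 2 / variances, axis=1)
    )
    # Stable log-sum-exp over the mixture components.
    m = log_probs.max()
    return m + np.log(np.exp(log_probs - m).sum())

# Two toy word models scored on one 13-dimensional "MFCC" frame.
rng = np.random.default_rng(0)
frame = rng.normal(size=13)
model_yes = (np.array([0.5, 0.5]), rng.normal(size=(2, 13)), np.ones((2, 13)))
model_no = (np.array([0.5, 0.5]), rng.normal(size=(2, 13)) + 3.0, np.ones((2, 13)))
scores = {w: gmm_loglik(frame, *m) for w, m in
          {"yes": model_yes, "no": model_no}.items()}
best = max(scores, key=scores.get)  # word whose model best explains the frame
```

In a real connected-word system, per-frame GMM scores like these are combined across frames by a decoder (in Kaldi, via HMM state sequences) rather than compared one frame at a time.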

Keywords:
Computer science; Humanoid robot; Speech recognition; Mel-frequency cepstrum; Convolutional neural network; Gesture recognition; Artificial intelligence; Robot; Feature extraction

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.20
Refs: 19
Citation Normalized Percentile: 0.52

Topics

Hand Gesture Recognition Systems
Physical Sciences →  Computer Science →  Human-Computer Interaction
Human Pose and Action Recognition
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Gait Recognition and Analysis
Physical Sciences →  Engineering →  Biomedical Engineering

Related Documents

JOURNAL ARTICLE

Multi-Modal Humanoid

A. N. Srinidhi, Ajay S. Kamath, R. Kumaraswamy

Year: 2020 Vol: 44 Pages: 0799-0802
JOURNAL ARTICLE

Randomized multi-modal motion planning for a humanoid robot manipulation task

Kris Hauser, Victor Ng‐Thow‐Hing

Journal: The International Journal of Robotics Research Year: 2010 Vol: 30 (6) Pages: 678-698