JOURNAL ARTICLE

Wearable multi-modal interface for human multi-robot interaction

Abstract

A complete prototype for multi-modal interaction between humans and multi-robot systems is described, with an application focus on search and rescue missions. On the human side, speech and arm and hand gestures are combined to select, localize, and communicate task requests and spatial information to one or more robots in the field. On the robot side, LEDs and vocal messages provide feedback to the human. The robots also employ coordinated autonomy to implement group behaviors for mixed-initiative interaction. The system has been tested with several robotic platforms across a number of useful interaction patterns.
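To make the interaction loop concrete, the sketch below shows one plausible way to fuse a recognized speech command with a pointing direction from a wearable sensor in order to resolve which robot an utterance addresses. This is a minimal illustration under stated assumptions, not the authors' implementation: every name, data structure, and threshold here (Robot, select_robot_by_pointing, the 15-degree angular tolerance) is hypothetical.

# Hypothetical sketch of speech-gesture fusion for robot selection.
# Not the paper's implementation; all names and values are assumptions.
import math
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    x: float
    y: float

def select_robot_by_pointing(robots, user_xy, pointing_rad,
                             max_err_rad=math.radians(15)):
    """Return the robot closest to the pointing ray, if any lies
    within the angular tolerance; otherwise None."""
    best, best_err = None, max_err_rad
    ux, uy = user_xy
    for r in robots:
        bearing = math.atan2(r.y - uy, r.x - ux)
        # Wrap the bearing/pointing difference to (-pi, pi] before comparing.
        err = abs(math.atan2(math.sin(bearing - pointing_rad),
                             math.cos(bearing - pointing_rad)))
        if err < best_err:
            best, best_err = r, err
    return best

def handle_utterance(command, robots, user_xy, pointing_rad):
    """Fuse modalities: the speech command supplies the task, the
    gesture resolves which robot is being addressed."""
    target = select_robot_by_pointing(robots, user_xy, pointing_rad)
    if target is None:
        # In the described system, feedback would come back via LEDs
        # or vocal messages from the robots.
        return "No robot in the pointed direction"
    return f"{target.name}: {command}"

robots = [Robot("rover1", 5.0, 0.0), Robot("drone1", 0.0, 5.0)]
print(handle_utterance("go to the marked area", robots,
                       (0.0, 0.0), math.radians(85)))

Pointing at 85 degrees from the origin selects drone1 (bearing 90 degrees, error 5 degrees, within tolerance), so the command is dispatched to that robot rather than to rover1.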

Keywords:
Robot, Human–computer interaction, Human–robot interaction, Multi-robot systems, Gesture, Wearable computer, Artificial intelligence, Embedded system

Metrics

Cited by: 34
FWCI (Field-Weighted Citation Impact): 3.66
References: 13
Citation Normalized Percentile: 0.97 (in top 10%)

Topics

Speech and dialogue systems (Physical Sciences → Computer Science → Artificial Intelligence)
Hand Gesture Recognition Systems (Physical Sciences → Computer Science → Human-Computer Interaction)
Social Robot Interaction and HRI (Social Sciences → Psychology → Social Psychology)