A complete prototype for multi-modal interaction between humans and multi-robot systems is described, with an application focus on search and rescue missions. On the human side, speech and arm and hand gestures are combined to select, localize, and communicate task requests and spatial information to one or more robots in the field. On the robot side, LEDs and vocal messages provide feedback to the human. The robots also employ coordinated autonomy to implement group behaviors for mixed-initiative interaction. The system has been tested with different robotic platforms across several useful interaction patterns.