JOURNAL ARTICLE

Learning flexible, multi-modal human-robot interaction by observing human-human interaction

Abstract

This paper presents a technique for learning flexible action selection in autonomous, multi-modal human-robot interaction (HRI) from observations of multi-modal human-human interaction (HHI). The proposed technique generates a model with symbolic states and actions that represents the scope of the observed mission. Variations in human behavior are learned as stochastic action effects, while execution-time perception noise is taken into account using likelihood models. During execution, the model drives dynamic action selection in HRI situations. Both the model and the evaluation system integrate the interaction modalities of spoken dialog, human body configuration, and exchanged objects. The technique is evaluated on a multi-modal service robot that is able both to observe a demonstration by two humans and to execute the generated mission autonomously.
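The abstract describes a mission model with symbolic states and actions, stochastic action effects, and likelihood models for noisy perception. A minimal illustrative sketch of that idea, not taken from the paper, might select the action with the highest expected reward under a belief derived from an observation likelihood. All state, action, and observation names below are hypothetical placeholders:

```python
# Hypothetical mission model (illustrative only, not from the paper):
# symbolic states/actions, stochastic action effects, and an observation
# likelihood capturing execution-time perception noise.
EFFECTS = {  # EFFECTS[(state, action)] -> {next_state: probability}
    ("greeted", "ask_order"): {"order_known": 0.8, "greeted": 0.2},
    ("order_known", "fetch_item"): {"item_delivered": 0.9, "order_known": 0.1},
}
LIKELIHOOD = {  # LIKELIHOOD[observation][state] = P(observation | state)
    "speech_order": {"order_known": 0.7, "greeted": 0.2, "item_delivered": 0.1},
}
REWARD = {"greeted": 0.0, "order_known": 1.0, "item_delivered": 2.0}

def belief_from_observation(obs):
    """Normalize the observation likelihood into a belief over states."""
    scores = LIKELIHOOD[obs]
    total = sum(scores.values())
    return {state: p / total for state, p in scores.items()}

def select_action(belief):
    """Pick the action with the highest expected reward under the belief."""
    best_action, best_value = None, float("-inf")
    for action in {a for (_, a) in EFFECTS}:
        value = 0.0
        for state, b in belief.items():
            # Unmodeled (state, action) pairs are assumed to leave the state unchanged.
            for nxt, p in EFFECTS.get((state, action), {state: 1.0}).items():
                value += b * p * REWARD.get(nxt, 0.0)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

belief = belief_from_observation("speech_order")
print(select_action(belief))  # prints "fetch_item"
```

Here a "speech_order" observation makes "order_known" the most likely state, so fetching the item has a higher expected reward than asking again; the paper's actual model and inference are richer than this sketch.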

Keywords:
Human-robot interaction, Human-computer interaction, Multi-modal interaction, Action selection, Perception, Spoken dialog, Service robot, Artificial intelligence

Metrics

Cited by: 5
FWCI (Field-Weighted Citation Impact): 0.80
References: 20
Citation Normalized Percentile: 0.79

Topics

Speech and dialogue systems (Physical Sciences → Computer Science → Artificial Intelligence)
Social Robot Interaction and HRI (Social Sciences → Psychology → Social Psychology)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)