This paper presents a technique for learning flexible action selection in autonomous, multi-modal human-robot interaction (HRI) from observations of multi-modal human-human interaction (HHI). The proposed technique generates a model with symbolic states and actions that represents the scope of the observed mission. Variations in human behavior are learned as stochastic action effects, while execution-time perception noise is taken into account using likelihood models. During execution, the model drives dynamic action selection in HRI situations. Both the model and the evaluation system integrate the interaction modalities of spoken dialog, human body configuration, and exchanged objects. The technique is evaluated on a multi-modal service robot that is able both to observe a demonstration by two humans and to execute the generated mission autonomously.
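The abstract outlines a pipeline of learning stochastic action effects from demonstrations, accounting for noisy execution-time percepts with likelihood models, and selecting actions dynamically over the symbolic model. The following is a minimal Python sketch of that loop; the count-based effect model and all names (learn_effect, belief_update, select_action) are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

# Hypothetical sketch of the approach described in the abstract; the symbolic
# model, function names, and data structures are illustrative assumptions.

# Stochastic action effects learned from observed human-human demonstrations:
# counts of (state, action) -> successor state, normalized into P(s' | s, a).
effect_counts = defaultdict(lambda: defaultdict(int))

def learn_effect(state, action, next_state):
    """Record one symbolic transition observed in an HHI demonstration."""
    effect_counts[(state, action)][next_state] += 1

def effect_distribution(state, action):
    """Normalize observed counts into a stochastic effect model P(s' | s, a)."""
    counts = effect_counts[(state, action)]
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()} if total else {}

def belief_update(belief, likelihood):
    """Account for execution-time perception noise: weigh each symbolic state
    by the likelihood of the current multi-modal observation, renormalize."""
    weighted = {s: p * likelihood(s) for s, p in belief.items()}
    z = sum(weighted.values()) or 1.0
    return {s: w / z for s, w in weighted.items()}

def select_action(belief, actions, value):
    """Dynamic action selection: pick the action with the highest expected
    value under the current belief and the learned stochastic effects."""
    def expected_value(action):
        return sum(
            p_s * p_next * value(s_next)
            for s, p_s in belief.items()
            for s_next, p_next in effect_distribution(s, action).items()
        )
    return max(actions, key=expected_value)
```

In this sketch, variations in human behavior show up as multiple observed successor states for the same (state, action) pair, which is one plausible reading of how the paper's stochastic action effects could be represented.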