This paper presents a reasoning system for a multi-modal service robot engaged in human-robot interaction. The system uses partially observable Markov decision processes (POMDPs) for decision making, together with an intermediate layer that bridges the abstraction gap between multi-modal real-world sensors and actuators on one side and symbolic POMDP reasoning on the other. A filter system abstracts multi-modal perception while preserving uncertainty and model soundness, and a command sequencer controls the execution of symbolic POMDP decisions across multiple actuator components. Through POMDP reasoning, the robot can handle uncertainty in both the observation and the prediction of human behavior, and can balance risk against opportunity. The system has been implemented on a multi-modal service robot, enabling it to act autonomously in modeled human-robot interaction scenarios. Experiments evaluate the characteristics of the proposed algorithms and architecture.
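To make the POMDP-based decision making concrete, the following is a minimal sketch of the standard Bayesian belief update that any such reasoner performs after acting and observing. The transition and observation probabilities below are illustrative toy values for a hypothetical human-interaction state, not the paper's actual model.

```python
def update_belief(belief, action, observation, T, O):
    """Bayes filter over hidden states:
    b'(s') proportional to O[a][s'][o] * sum_s T[a][s][s'] * b(s)."""
    states = range(len(belief))
    new_belief = []
    for s2 in states:
        # Predict: propagate the old belief through the transition model.
        predicted = sum(T[action][s][s2] * belief[s] for s in states)
        # Correct: weight by the likelihood of the received observation.
        new_belief.append(O[action][s2][observation] * predicted)
    total = sum(new_belief)
    return [p / total for p in new_belief]  # normalize to a distribution

# Toy model (assumed for illustration): hidden state is whether a person
# is "interested" (state 0) or "not interested" (state 1) in interacting.
T = {"greet": [[0.9, 0.1], [0.3, 0.7]]}    # T[a][s][s']
O = {"greet": [[0.8, 0.2], [0.25, 0.75]]}  # O[a][s'][o]; o=0 is "eye contact"

# Starting from an uninformed belief, observing eye contact after a
# greeting shifts probability mass toward "interested".
b = update_belief([0.5, 0.5], "greet", 0, T, O)
```

In an architecture like the one described, the filter system would supply the discrete observation symbol to this update while preserving the underlying perceptual uncertainty in the belief state.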