Song Wang, Songhao Piao, Xiaokun Leng, Zhicheng He
To achieve robot motion imitation, it is important to ensure morphological similarity, physical feasibility, and generalization of actions between robots and motion capture datasets. Traditional motion controllers require a dedicated controller for each motion type, and tuning controller parameters can be time-consuming. Reinforcement learning algorithms are therefore increasingly used in robot motion control, enabling robots or physically simulated characters to learn skills such as maintaining balance or completing specific tasks. This paper presents a system for imitation-based action learning that allows robots to imitate flexible, graph-powered motion-matching datasets. By incorporating domain randomization during training, the model remains robust to errors in the environment or in the model itself, allowing the action model obtained in simulation to be deployed on a real robot. To verify the proposed method experimentally, the paper designs a simulation environment for a 20-degree-of-freedom multi-joint bipedal robot and deploys the trained behavior model on the Roban robot for imitation action learning and responsive dynamic walking.
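The domain randomization idea mentioned above can be sketched briefly. The snippet below is a minimal illustration, not the paper's implementation: the parameter names, nominal values, and the uniform scaling range are all assumptions chosen for demonstration. At each training episode, the simulated robot's physical parameters are perturbed around their nominal values, so the learned policy does not overfit to one exact simulation and transfers better to the real robot.

```python
import random

# Hypothetical nominal simulation parameters (illustrative only).
NOMINAL = {
    "joint_friction": 0.05,
    "link_mass_scale": 1.0,
    "ground_friction": 0.8,
    "motor_strength_scale": 1.0,
}

def randomize_domain(nominal, spread=0.2, rng=random):
    """Return a perturbed copy of the simulation parameters.

    Each parameter is scaled by a factor drawn uniformly from
    [1 - spread, 1 + spread], so the policy sees a slightly
    different "physics" every episode.
    """
    return {k: v * rng.uniform(1.0 - spread, 1.0 + spread)
            for k, v in nominal.items()}

# Example: sample a randomized parameter set for one training episode.
params = randomize_domain(NOMINAL)
```

In practice the sampled parameters would be written into the physics engine before each episode; the choice of which parameters to randomize and how widely is itself a design decision that trades robustness against training difficulty.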