Multi-task imitation learning (MTIL) trains an autonomous agent to perform multiple tasks from multi-task expert demonstrations. Because different tasks often share similarities, learning them simultaneously can greatly improve training efficiency and the generalization ability of the MTIL agent. However, existing MTIL methods often suffer from negative transfer, where learning multiple tasks simultaneously yields lower performance than learning a single task. To address this problem, this work proposes a novel multi-task imitation learning agent, DRL–GAMIL. DRL–GAMIL uses disentangled representation learning and generative adversarial networks to extract task-shared and task-specific features, which are then leveraged to learn a policy that performs consistently well across multiple tasks. DRL–GAMIL is evaluated on three simulated tasks, and the experimental results show that it achieves higher performance than the baseline methods.
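The abstract describes disentangling observations into task-shared and task-specific features, with an adversarial objective keeping the shared features task-agnostic. The following is a minimal sketch of that idea only, not the paper's implementation: the encoders are stand-in linear maps, all dimensions and names (`W_shared`, `W_spec`, `adversarial_losses`, etc.) are hypothetical, and the GAN is reduced to a task discriminator whose confusion the shared encoder would maximize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): observation size,
# shared/specific feature sizes, and number of tasks.
OBS_DIM, SHARED_DIM, SPEC_DIM, N_TASKS = 8, 4, 4, 3

# Linear stand-ins for the task-shared encoder, the per-task specific
# encoders, and the adversarial task discriminator.
W_shared = rng.normal(size=(OBS_DIM, SHARED_DIM))
W_spec = rng.normal(size=(N_TASKS, OBS_DIM, SPEC_DIM))
W_disc = rng.normal(size=(SHARED_DIM, N_TASKS))


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def encode(obs, task_id):
    """Split an observation into task-shared and task-specific features."""
    z_shared = obs @ W_shared
    z_spec = obs @ W_spec[task_id]
    return z_shared, z_spec


def adversarial_losses(obs_batch, task_ids):
    """Discriminator tries to recover the task ID from shared features;
    the shared encoder is trained with the opposite objective, so the
    shared features carry no task identity."""
    z_shared = obs_batch @ W_shared
    probs = softmax(z_shared @ W_disc)
    nll = -np.log(probs[np.arange(len(task_ids)), task_ids] + 1e-12)
    disc_loss = nll.mean()         # discriminator minimizes this
    encoder_adv_loss = -disc_loss  # encoder maximizes discriminator confusion
    return disc_loss, encoder_adv_loss


obs = rng.normal(size=(6, OBS_DIM))
tasks = np.array([0, 1, 2, 0, 1, 2])
d_loss, e_loss = adversarial_losses(obs, tasks)
```

In a full agent, the concatenated shared and specific features would feed a policy trained by imitation, while the discriminator and shared encoder play the min-max game sketched above.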