Luca Marzari, Davide Corsi, Enrico Marchesini, Alessandro Farinelli
This work investigates the effects of Curriculum Learning (CL)-based approaches on the agent's performance. In particular, we focus on the safety aspect of robotic mapless navigation, comparing it against a standard end-to-end (E2E) training strategy. To this end, we present a CL approach that leverages Transfer of Learning (ToL) and fine-tuning in a Unity-based simulation with the Robotnik Kairos as the robotic agent. For a fair comparison, our evaluation considers an equal computational demand for every learning approach (i.e., the same number of interactions and difficulty of the environments) and confirms that our CL-based method using ToL outperforms the E2E methodology. In particular, we improve the average success rate and the safety of the trained policy, resulting in 10% fewer collisions in unseen testing scenarios. To further confirm these results, we employ a formal verification tool to quantify the number of correct behaviors of Reinforcement Learning policies over desired specifications.
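The training scheme described above can be illustrated with a minimal sketch (not the authors' implementation; the stage difficulties, the toy update rule, and all function names are hypothetical): a policy trained on an easy stage seeds training on the next, harder stage via transfer of learning, while the total interaction budget is held equal to the end-to-end baseline so the comparison is computationally fair.

```python
# Illustrative sketch of curriculum learning with Transfer of Learning (ToL)
# versus end-to-end (E2E) training under an equal interaction budget.
# The "train" routine is a toy stand-in for an RL training loop, not the
# method evaluated in the paper.

def train(policy, difficulty, interactions):
    """Toy training loop: nudge the policy's single parameter toward the
    stage difficulty, one update per environment interaction."""
    for _ in range(interactions):
        policy["w"] += 0.1 * (difficulty - policy["w"])
    return policy

def curriculum_with_tol(stages, total_interactions):
    """Split the interaction budget evenly across curriculum stages; each
    stage fine-tunes the policy transferred from the previous one."""
    budget = total_interactions // len(stages)
    policy = {"w": 0.0}  # fresh policy for the first (easiest) stage
    for difficulty in stages:
        policy = train(policy, difficulty, budget)  # ToL: reuse weights
    return policy

def end_to_end(final_difficulty, total_interactions):
    """E2E baseline: spend the entire budget on the hardest environment."""
    return train({"w": 0.0}, final_difficulty, total_interactions)

# Same total budget (300 interactions) for both approaches.
cl_policy = curriculum_with_tol(stages=[0.3, 0.6, 1.0], total_interactions=300)
e2e_policy = end_to_end(final_difficulty=1.0, total_interactions=300)
```

The key point the sketch captures is the fairness constraint from the abstract: both approaches consume the same number of interactions, so any performance gap is attributable to the curriculum and weight transfer rather than extra compute.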