Shanxing Zhou, Weichao Zhuang, Guodong Yin, Haoji Liu, Chunlong Qiu
This paper proposes a cooperative merging control strategy for connected and automated vehicles (CAVs) based on distributed multi-agent deep deterministic policy gradient (MADDPG). First, the on-ramp merging scenario and the vehicle model are built, accounting for safe merging distances and acceleration limits. Second, MADDPG is adopted to learn the cooperative control strategy, considering rear-end safety, lateral safety, and vehicle energy consumption; a distributed architecture is proposed to improve training efficiency. Finally, several on-ramp merging scenarios are simulated. Simulation results show that the distributed MADDPG merging strategy reduces energy consumption by 7.4% and travel time by 5.3% compared to the regular merging strategy.
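The MADDPG approach the abstract describes follows the usual centralized-training, decentralized-execution pattern: each CAV runs its own deterministic actor on local observations, while a centralized critic scores the joint state-action during training. A minimal sketch of that structure, assuming illustrative linear policies, dimensions, and a ±3 m/s² acceleration limit (none of these values are from the paper):

```python
import numpy as np


class Agent:
    """One CAV agent: a decentralized actor mapping its LOCAL
    observation to an acceleration command (illustrative linear policy)."""

    def __init__(self, obs_dim, act_dim, rng):
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim))

    def act(self, obs):
        # Deterministic policy (the "DPG" part), clipped to
        # an assumed acceleration limit of +/- 3 m/s^2.
        return np.clip(self.W @ obs, -3.0, 3.0)


class CentralCritic:
    """Centralized critic used only during training: it scores the
    JOINT observation-action of all agents (the multi-agent part)."""

    def __init__(self, joint_dim, rng):
        self.w = rng.normal(scale=0.1, size=joint_dim)

    def q_value(self, joint_obs, joint_act):
        x = np.concatenate([joint_obs, joint_act])
        return float(self.w @ x)


rng = np.random.default_rng(0)
n_agents, obs_dim, act_dim = 3, 4, 1
agents = [Agent(obs_dim, act_dim, rng) for _ in range(n_agents)]
critic = CentralCritic(n_agents * (obs_dim + act_dim), rng)

# Execution: each vehicle acts on its own observation only.
observations = [rng.normal(size=obs_dim) for _ in range(n_agents)]
actions = [a.act(o) for a, o in zip(agents, observations)]

# Training: the critic conditions on everything, which is what lets
# independent actors learn a cooperative merging policy.
q = critic.q_value(np.concatenate(observations), np.concatenate(actions))
```

The reward terms the abstract lists (rear-end safety, lateral safety, energy consumption) would enter through the critic's training target, which this sketch omits.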
Ang Ji, Jiayi Huang, Ziye Qin, Zhanbo Sun, Ruibin Zhao, Guoqian Zheng
Tianchuang Meng, Biao Xu, Xiaohui Qin, Jin Huang, Manjiang Hu, Zhihua Zhong
Abu Jafar Md Muzahid, Yang Shi, Zejiang Wang, Anye Zhou, Adian Cook, Chieh Ross Wang, Zhenbo Wang
Chinmay Mahabal, Hua Fang, Honggang Wang