Huan Liu, Hua Qiu, Wenjie Lu, Xiaonian Shan
On urban arterials, buses face dual constraints from signal-controlled intersections and bus stop dwell demands, and the resulting frequent start–stop cycles reduce operational efficiency and raise energy consumption. To address this challenge, a sustainable eco-driving strategy integrating offline and online Reinforcement Learning (RL) is proposed in this study. Leveraging real-world trajectory data from a 15.47 km route with 31 stops, the energy consumption characteristics of electric buses under the combined effects of stops and intersections are systematically analyzed, and high-energy-consumption scenarios are precisely identified. An initial energy-saving strategy is first trained with offline RL and subsequently refined online in a vehicle–infrastructure cooperative simulation environment covering three typical stop configurations. The Soft Actor-Critic (SAC) algorithm is employed to reconcile the dual goals of energy efficiency and ride comfort. Simulation results show that the proposed strategy achieves an 11.2% reduction in energy consumption and a 37.7% decrease in travel time compared with the Krauss car-following benchmark model. This study confirms the effectiveness of RL in improving the operational sustainability of public transport systems and offers a scalable technical framework for advancing green urban mobility. The findings provide theoretical support and practical references for the large-scale deployment and engineering application of energy-saving autonomous driving technology for electric buses.
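The abstract states that SAC is used to reconcile energy efficiency and ride comfort, but does not give the reward formulation. A minimal sketch of one common way to encode such a trade-off, assuming a weighted sum of per-step energy use and jerk magnitude (the weights `w_energy` and `w_comfort` and the function name are illustrative, not from the paper):

```python
def eco_reward(energy_kwh: float, jerk: float,
               w_energy: float = 1.0, w_comfort: float = 0.2) -> float:
    """Hypothetical per-step reward for an eco-driving RL agent.

    energy_kwh: energy drawn from the battery over the time step (>= 0)
    jerk:       rate of change of acceleration (m/s^3), a comfort proxy
    Returns a negative cost: the agent maximizes reward by using less
    energy while keeping jerk small.
    """
    return -(w_energy * energy_kwh + w_comfort * abs(jerk))


# Example: a smoother, lower-energy step receives a higher reward.
smooth = eco_reward(energy_kwh=0.05, jerk=0.3)
harsh = eco_reward(energy_kwh=0.08, jerk=2.5)
assert smooth > harsh
```

In an actual SAC training loop, a scalar reward of this shape would be returned by the simulation environment after each control action (e.g. a target acceleration), with the weights tuned so neither objective dominates.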