In transportation systems, drivers usually choose their routes based on their own knowledge of the network, obtained from previous trips. When faced with jams, drivers may change their routes to take a faster path. However, this re-routing may not be a good choice, because other drivers can act in the same way; moreover, such behaviour can create jams on other links. On the other hand, if drivers build their routes aiming at minimizing the overall travel time (i.e., maximizing the system's utility), rather than their individual travel time (the agent's utility), the whole system may benefit. This work presents two reinforcement learning algorithms for solving the route choice problem in road networks. IQ-learning uses an individual reward function and aims at finding a policy that maximizes each agent's utility. DQ-learning, in contrast, shapes the agents' rewards using the difference rewards function and aims at finding routes that maximize the system's utility. Through experiments we show that DQ-learning reduces the overall travel time when compared to other methods.
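The reward shaping the abstract refers to can be illustrated with a small sketch. Difference rewards give each agent its marginal contribution to the system utility, D_i = G(z) − G(z₋i), where G(z) is the system utility with all agents and G(z₋i) is the utility with agent i removed. The congestion model, link names, and numeric values below are hypothetical, chosen only to make the computation concrete; they are not from the paper.

```python
# Hypothetical sketch of difference rewards for route choice.
# Each agent's route is a list of links; link travel time grows with flow.
from collections import Counter

def link_time(free_flow, flow):
    # Illustrative congestion model (assumption): time rises linearly with flow.
    return free_flow * (1 + 0.5 * flow)

def system_utility(routes, free_flow):
    # G(z): negative total travel time over all agents' route choices.
    flows = Counter(link for route in routes for link in route)
    return -sum(link_time(free_flow[l], flows[l])
                for route in routes for l in route)

def difference_reward(routes, i, free_flow):
    # D_i = G(z) - G(z_-i): agent i's marginal contribution to the system.
    without_i = routes[:i] + routes[i + 1:]
    return system_utility(routes, free_flow) - system_utility(without_i, free_flow)

# Example: two agents share link 'a', a third uses link 'b'.
free_flow = {"a": 1.0, "b": 2.0}
routes = [["a"], ["a"], ["b"]]
d0 = difference_reward(routes, 0, free_flow)  # agent 0's difference reward
```

In a DQ-learning setting, D_i would replace the individual travel-time reward in each agent's Q-update, so agents are rewarded for actions that improve the system, not just their own trip.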