JOURNAL ARTICLE

Driving Behavior Modeling Using Naturalistic Human Driving Data With Inverse Reinforcement Learning

Zhiyu Huang, Jingda Wu, Chen Lv

Year: 2021   Journal: IEEE Transactions on Intelligent Transportation Systems   Vol: 23 (8)   Pages: 10239-10251   Publisher: Institute of Electrical and Electronics Engineers

Abstract

Driving behavior modeling is of great importance for designing safe, smart, and personalized autonomous driving systems. In this paper, an internal reward function-based driving model that emulates the human decision-making mechanism is utilized. To infer the reward function parameters from naturalistic human driving data, we propose a structural assumption about human driving behavior that focuses on discrete latent driving intentions. It converts the continuous behavior modeling problem into a discrete setting and thus makes it tractable to learn reward functions with maximum entropy inverse reinforcement learning (IRL). Specifically, a polynomial trajectory sampler is adopted to generate candidate trajectories considering high-level intentions and to approximate the partition function in the maximum entropy IRL framework. An environment model considering interactive behaviors among the ego and surrounding vehicles is built to better estimate the generated trajectories. The proposed method is applied to learn personalized reward functions for individual human drivers from the NGSIM highway driving dataset. The qualitative results demonstrate that the learned reward functions can explicitly express the preferences of different drivers and interpret their decisions. The quantitative results reveal that the learned reward functions are robust, as manifested by only a marginal decline in proximity to the human driving trajectories when the reward functions are applied under testing conditions. In terms of testing performance, the personalized modeling method outperforms the general modeling approach, significantly reducing the modeling errors in human likeness (a custom metric to gauge accuracy), and both deliver better results than the other baseline methods. Moreover, it is found that predicting the response actions of surrounding vehicles and incorporating their potential decelerations caused by the ego vehicle are critical in estimating the generated trajectories, and that the accuracy of personalized planning using the learned reward functions relies on the accuracy of the forecasting model.
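
As a rough illustration of the learning scheme the abstract describes, the sketch below shows maximum entropy IRL in which the partition function is approximated by a set of sampled candidate trajectories. It is not the authors' implementation: the quintic lateral-profile sampler standing in for the polynomial trajectory sampler, the trajectory features, and the learning-rate and iteration settings are all assumptions made for the example.

# Minimal sketch (assumed, not the paper's code): maximum entropy IRL where the
# partition function is approximated over sampled candidate trajectories.
import numpy as np

def sample_candidates(n, horizon=40, rng=None):
    """Toy stand-in for a polynomial trajectory sampler: quintic lateral
    profiles toward randomly drawn target offsets (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, horizon)
    shape = 10 * t**3 - 15 * t**4 + 6 * t**5      # quintic, zero end velocity/accel
    targets = rng.uniform(-3.5, 3.5, size=n)       # lateral target offsets [m]
    return targets[:, None] * shape[None, :]       # (n, horizon) lateral paths

def features(trajectories):
    """Illustrative trajectory features: mean |offset|, terminal |offset|,
    and mean squared third difference (a crude 'jerk' proxy)."""
    d3 = np.diff(trajectories, n=3, axis=1)
    return np.stack([np.abs(trajectories).mean(axis=1),
                     np.abs(trajectories[:, -1]),
                     (d3**2).mean(axis=1)], axis=1)

def maxent_irl(human_feat, cand_feat, lr=0.05, iters=300):
    """Learn linear reward weights theta with r(traj) = theta . f(traj),
    using the candidate set to approximate the partition function."""
    theta = np.zeros(cand_feat.shape[1])
    for _ in range(iters):
        scores = cand_feat @ theta
        scores -= scores.max()                          # numerical stability
        probs = np.exp(scores) / np.exp(scores).sum()   # MaxEnt trajectory probabilities
        grad = human_feat - probs @ cand_feat           # demonstration minus expectation
        theta += lr * grad
    return theta

rng = np.random.default_rng(42)
candidates = sample_candidates(100, rng=rng)
cand_feat = features(candidates)
human_feat = features(candidates[:1])[0]   # pretend one candidate is the human demonstration
print("learned reward weights:", maxent_irl(human_feat, cand_feat))

The gradient step follows the standard maximum entropy IRL update (demonstrated feature counts minus the model's expected feature counts); restricting the expectation to a finite set of sampled candidates is what the structural assumption about discrete latent intentions makes tractable.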

Keywords:
Reinforcement learning, Computer science, Reinforcement, Inverse, Artificial intelligence, Engineering, Mathematics, Structural engineering

Metrics

Cited By: 185
FWCI (Field-Weighted Citation Impact): 13.58
Refs: 35
Citation Normalized Percentile: 0.99 (in the top 1%)

Topics

Autonomous Vehicle Technology and Safety
Physical Sciences → Engineering → Automotive Engineering
Traffic control and management
Physical Sciences → Engineering → Control and Systems Engineering
Traffic Prediction and Management Techniques
Physical Sciences → Engineering → Building and Construction

Related Documents

JOURNAL ARTICLE

Driving Behavior Modeling Based on Inverse Reinforcement Learning

Xiaobin Xu, Wei Han, Bo Leng, Lu Xiong

Journal: SAE Technical Paper Series   Year: 2024   Vol: 1
JOURNAL ARTICLE

Driving Behavior Modeling in Residential Roads with Inverse Reinforcement Learning

Masamichi Shimosaka

Journal: Journal of the Robotics Society of Japan   Year: 2021   Vol: 39 (7)   Pages: 631-636
JOURNAL ARTICLE

Instant Inverse Modeling of Stochastic Driving Behavior With Deep Reinforcement Learning

Dongsu LeeMinhae Kwon

Journal: IEEE Transactions on Consumer Electronics   Year: 2024   Vol: 71 (1)   Pages: 2152-2162