Pedestrian trajectory prediction is a key technology in autonomous driving. Because pedestrian trajectories are highly variable and pedestrians interact in complex ways, effective extraction and fusion of spatial-temporal trajectory features is essential. Most previous studies do not explicitly model the trend of interactions between pedestrians, which can help a model focus on the neighboring pedestrians that most strongly influence the future motion of the predicted target. To address this issue, we propose a Multi-dimensional Spatial-Temporal fusion Graph attention network, called MST-G. Specifically, directed graphs are used to model the interactions among pedestrians. In addition to using spatial-temporal convolution to obtain interaction-aware trajectory features, we add edge convolution to capture the temporal continuity of the interactions. Finally, an LSTM encoder-decoder is used for trajectory generation. Experiments show that our model achieves better performance on two publicly available pedestrian datasets (ETH and UCY).
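The two ingredients highlighted above, a directed interaction graph and a temporal (edge) convolution over its edge weights, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the inverse-distance edge weighting, the smoothing kernel, and all function names are assumptions made for the example.

```python
import numpy as np

def directed_adjacency(positions):
    """Directed interaction weights from pedestrian positions at one frame.

    w[i, j] = influence of pedestrian j on pedestrian i, modeled here
    (illustratively) as inverse distance. Per-row normalization makes the
    graph directed: in general w[i, j] != w[j, i].
    """
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    w = 1.0 / (dist + 1e-6)
    np.fill_diagonal(w, 0.0)  # no self-interaction
    return w / w.sum(axis=1, keepdims=True)

def edge_convolution(adj_seq, kernel=(0.25, 0.5, 0.25)):
    """1-D temporal convolution over the edge-weight sequence.

    adj_seq has shape (T, n, n); smoothing each edge's weight across frames
    exposes the trend of the interaction rather than a single-frame snapshot.
    """
    k = np.asarray(kernel)
    pad = len(k) // 2
    padded = np.pad(adj_seq, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    return np.stack([
        sum(k[m] * padded[t + m] for m in range(len(k)))
        for t in range(adj_seq.shape[0])
    ])
```

Because the kernel sums to one, each smoothed adjacency matrix remains row-stochastic, so the output can still be read as attention-like weights over neighbors at every frame.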
Pengqian Han, Partha S. Roop, Jiamou Liu, Tianzhe Bao, Yifei Wang