JOURNAL ARTICLE

Learning a Memory-Enhanced Multi-Stage Goal-Driven Network for Egocentric Trajectory Prediction

Xiuen Wu, Sien Li, Tao Wang, Ge Xu, George Papageorgiou

Year: 2024 | Journal: Biomimetics | Vol: 9 (8) | Pages: 462 | Publisher: Multidisciplinary Digital Publishing Institute

Abstract

We propose a memory-enhanced multi-stage goal-driven network (ME-MGNet) for egocentric trajectory prediction in dynamic scenes. Our key idea is to build a scene layout memory inspired by human perception in order to transfer knowledge from prior experiences to the current scenario in a top-down manner. Specifically, given a test scene, we first perform scene-level matching based on our scene layout memory to retrieve trajectories from visually similar scenes in the training data. This is followed by trajectory-level matching and memory filtering to obtain a set of goal features. In addition, a multi-stage goal generator takes these goal features and uses a backward decoder to produce several stage goals. Finally, we integrate the above steps into a conditional autoencoder and a forward decoder to produce trajectory prediction results. Experiments on three public datasets, JAAD, PIE, and KITTI, and a new egocentric trajectory prediction dataset, Fuzhou DashCam (FZDC), validate the efficacy of the proposed method.
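The retrieval-then-predict idea in the abstract (match the current scene against a memory of past scenes, pull goal candidates from the matches, then decode a trajectory through intermediate stage goals) can be illustrated with a minimal sketch. The function names, the cosine-similarity matcher, the use of 2D goal positions in place of learned goal features, and the linear interpolation standing in for the forward decoder are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def cosine_sim(query, memory):
    # Cosine similarity between one query vector and each row of a memory matrix.
    return (memory @ query) / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-8
    )

def retrieve_goals(scene_feat, memory_feats, memory_goals, k=3):
    # Scene-level matching: select the k stored scenes most similar to the
    # current one and return their associated goal positions (a stand-in
    # for the paper's goal features after trajectory-level filtering).
    idx = np.argsort(-cosine_sim(scene_feat, memory_feats))[:k]
    return memory_goals[idx]

def predict_trajectory(last_obs, goals, steps_per_stage=4):
    # Multi-stage decoding sketch: average retrieved goals into a final goal,
    # place one intermediate stage goal halfway, then linearly interpolate
    # between waypoints (a crude stand-in for the forward decoder).
    final_goal = goals.mean(axis=0)
    stage_goal = (last_obs + final_goal) / 2.0
    waypoints = [last_obs, stage_goal, final_goal]
    traj = []
    for p, q in zip(waypoints[:-1], waypoints[1:]):
        for t in range(1, steps_per_stage + 1):
            traj.append(p + (q - p) * t / steps_per_stage)
    return np.array(traj)

# Toy memory of three scenes: a feature vector and a goal position each.
memory_feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
memory_goals = np.array([[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])

scene = np.array([1.0, 0.1])                       # current scene feature
goals = retrieve_goals(scene, memory_feats, memory_goals, k=2)
traj = predict_trajectory(np.array([0.0, 0.0]), goals)
```

In this toy setup the two retrieved scenes are the ones whose features point in roughly the same direction as the query, and the decoded trajectory ends exactly at the mean of their goals; the real model replaces each of these hand-coded steps with a learned module.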

Keywords:
Computer science, Trajectory, Artificial intelligence, Autoencoder, Matching (statistics), Set (abstract data type), Generator (circuit theory), Key (lock), Computer vision, Machine learning, Deep learning, Power (physics)

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 60
Citation Normalized Percentile: 0.13

Topics

Autonomous Vehicle Technology and Safety (Physical Sciences → Engineering → Automotive Engineering)
Video Surveillance and Tracking Methods (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Anomaly Detection Techniques and Applications (Physical Sciences → Computer Science → Artificial Intelligence)