JOURNAL ARTICLE

Interaction-Aware Crowd Navigation via Augmented Relational Graph Learning

Abstract

Safe, effective, and socially compliant navigation in crowded environments is an essential yet challenging task for mobile robots. Previous work has demonstrated the advantages of deep reinforcement learning for learning socially cooperative navigation policies. However, most prior learning methods suffer from slow convergence and a limited action space because they rely on value-based models, which learn only discrete-action navigation policies under sparse rewards. To overcome these limitations, an augmented relational-graph-based reinforcement learning method, CEM-RGL, is proposed. It incorporates the cross-entropy method (CEM) into a relational graph learning (RGL) framework to obtain sufficient samples in a continuous state-action space during training, and introduces a graph attention network (GAT) to extract an efficient and scalable representation of crowd-robot interaction. A reward shaping technique is applied to accelerate training convergence. Evaluation against state-of-the-art methods in simulation experiments demonstrates that the crowd navigation policy trained with the proposed augmented method achieves a higher success rate and a higher cumulative reward return.
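The core augmentation the abstract describes is using CEM to search a continuous action space instead of a fixed discrete action set. As a minimal sketch (not the paper's implementation), CEM repeatedly samples candidate actions from a Gaussian, scores them with some value estimate, and refits the Gaussian to the top-scoring "elite" samples; `score_fn` here stands in for the learned value model, and the toy quadratic objective is purely illustrative:

```python
import numpy as np

def cem_optimize(score_fn, dim, n_samples=64, n_elite=8, n_iters=20, seed=0):
    """Cross-entropy method: iteratively refit a Gaussian to the elite samples."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(n_iters):
        # Sample candidate actions from the current Gaussian.
        samples = rng.normal(mean, std, size=(n_samples, dim))
        scores = np.array([score_fn(s) for s in samples])
        # Keep the n_elite highest-scoring candidates and refit to them.
        elite = samples[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Toy stand-in for a learned value function: prefer actions near (0.5, -0.3).
target = np.array([0.5, -0.3])
best = cem_optimize(lambda a: -np.sum((a - target) ** 2), dim=2)
```

In the navigation setting, `score_fn` would be the graph-based value network evaluating each sampled velocity command, so the policy can act in continuous space without enumerating discrete actions.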

Keywords:
Reinforcement learning; Computer science; Graph; Artificial intelligence; Scalability; Machine learning; Robot; Entropy; Mobile robot; Feature learning; Human–computer interaction; Theoretical computer science

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 0.17
References: 39
Citation Normalized Percentile: 0.50

Topics

Evacuation and Crowd Dynamics (Physical Sciences → Engineering → Ocean Engineering)
Reinforcement Learning in Robotics (Physical Sciences → Computer Science → Artificial Intelligence)
Autonomous Vehicle Technology and Safety (Physical Sciences → Engineering → Automotive Engineering)