JOURNAL ARTICLE

Deep reinforcement learning-based reactive trajectory planning method for UAVs

Lijia Cao, Lin Wang, Yang Liu, Weihong Xu, Chuang Geng

Year: 2024   Journal: Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering   Vol: 238 (10)   Pages: 1018-1037   Publisher: SAGE Publishing

Abstract

To improve the ability of unmanned aerial vehicles (UAVs) to avoid dynamic threats during flight, this paper proposes a deep reinforcement learning-based reactive trajectory planning method. First, a constrained Rapidly-exploring Random Tree-Connect algorithm (C-RRT-Connect) is proposed as the basic algorithm for reactive trajectory planning, globally planning a trajectory that avoids the static obstacles in the environment. The C-RRT-Connect algorithm introduces the idea of target attraction to constrain the selection of the optimal growth point in the RRT-Connect algorithm. Then, starting from the global trajectory, local optimization is carried out according to the dynamic threats the UAV detects during flight. Based on the real-time relative state between the UAV and a detected dynamic threat, reaction sampling points and directional coefficients for avoiding that threat are generated online by an action network trained with the deep deterministic policy gradient (DDPG) algorithm, and the local trajectory is then adjusted to modify the UAV's flight trajectory and achieve reactive obstacle avoidance. The simulation experiments first compare the global trajectory planning performance of C-RRT-Connect and RRT-Connect in a static environment, and then compare the local trajectory planning performance of the DDPG algorithm and the artificial potential field method in a dynamic environment. The results show that in a static environment the C-RRT-Connect algorithm searches faster, produces fewer invalid samples, and yields higher-quality trajectories than the RRT-Connect algorithm; in a dynamic environment, the DDPG algorithm reduces the average running time by about 26% compared with the artificial potential field method and evades dynamic threats more effectively in real time.
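The abstract's "target attraction" idea — constraining the growth point of RRT-Connect so tree expansion is biased toward the goal — can be illustrated with a minimal sketch. This is an assumption-laden reading of the one-sentence description above, not the paper's actual formulation: the function name `constrained_steer`, the blend parameter `attraction`, and the convex-combination rule are all hypothetical.

```python
import numpy as np

def constrained_steer(x_near, x_rand, x_goal, step=1.0, attraction=0.5):
    """Hypothetical sketch of 'target attraction' in C-RRT-Connect.

    The growth direction is a convex combination of the usual
    random-sample direction and the goal direction, so the tree is
    pulled toward the target while keeping RRT-style exploration.
    """
    d_rand = x_rand - x_near          # standard RRT steering direction
    d_goal = x_goal - x_near          # attraction toward the goal
    # `attraction` in [0, 1] weights the goal pull against exploration.
    direction = (1.0 - attraction) * d_rand + attraction * d_goal
    norm = np.linalg.norm(direction)
    if norm < 1e-9:                   # degenerate case: no preferred direction
        return x_near.copy()
    return x_near + step * direction / norm

# Example: grow one step from the origin, blending a random sample
# along +x with a goal along +y.
x_new = constrained_steer(np.zeros(3), np.array([4.0, 0.0, 0.0]),
                          np.array([0.0, 4.0, 0.0]), step=1.0, attraction=0.5)
```

With `attraction=0`, this reduces to plain RRT-Connect steering; with `attraction=1`, growth heads straight for the goal, which matches the abstract's claim of faster search with fewer invalid samples at the cost of less exploration.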

Keywords:
Trajectory planning, Reinforcement learning, Trajectory optimization, Random tree, Obstacle avoidance, Motion planning, Control theory, Artificial intelligence, Robotics, Mobile robot

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 18
Citation Normalized Percentile: 0.06

Topics

Robotic Path Planning Algorithms
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Autonomous Vehicle Technology and Safety
Physical Sciences →  Engineering →  Automotive Engineering
Guidance and Control Systems
Physical Sciences →  Engineering →  Aerospace Engineering

Related Documents

JOURNAL ARTICLE

A Trajectory Planning and Tracking Method Based on Deep Hierarchical Reinforcement Learning

Jiajie Zhang, Bao-Lin Ye, Xin Wang, Lingxi Li, Bo Song

Journal: Journal of Intelligent and Connected Vehicles   Year: 2025   Vol: 8 (2)   Pages: 9210056-1
JOURNAL ARTICLE

Research on trajectory planning based on deep reinforcement learning

Zan Zhou, Rui Hu, Zheng Gong, Yuanqiang Zhang

Journal: Journal of Physics: Conference Series   Year: 2024   Vol: 2882 (1)   Pages: 012065
JOURNAL ARTICLE

Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints

IJSREM JOURNAL

Journal: INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT   Year: 2023   Vol: 07 (08)