JOURNAL ARTICLE

Autonomous UAV Visual Navigation Using an Improved Deep Reinforcement Learning

Hussein Samma, Sami El Ferik

Year: 2024 | Journal: IEEE Access | Vol: 12 | Pages: 79967-79977 | Publisher: Institute of Electrical and Electronics Engineers

Abstract

In recent years, unmanned aerial vehicles (UAVs) have grown in popularity for a variety of purposes, including parcel delivery, search operations for missing persons, and surveillance. However, autonomously navigating UAVs in dynamic environments is a challenging task due to the presence of moving objects such as pedestrians. In addition, traditional deep reinforcement learning approaches suffer from slow learning rates in dynamic situations and require substantial training data. To improve learning performance, the present study proposes an enhanced deep reinforcement learning approach that encompasses two distinct learning stages, namely reinforced and self-supervised. In the reinforced learning stage, a deep Q-learning network (DQN) is implemented and trained, guided by the loss derived from the Bellman equation. The self-supervised stage, in turn, fine-tunes the backbone layers of the DQN and is directed by a contrastive loss function. The main benefit of incorporating the self-supervised stage is to speed up the encoding of the input scene captured by the UAV camera. To further enhance navigation performance, an obstacle detection model was embedded to reduce UAV collisions. For experimental analysis, we utilized an outdoor simulated UAV environment called Blocks, which contains stationary objects that mimic buildings as well as moving pedestrians. The study indicates that introducing the self-supervised stage led to significant improvements in navigation performance: the simulated UAV was able to travel longer distances in the correct direction toward the goal point. Moreover, the conducted analysis shows significantly better navigation performance compared with other DQN-based approaches such as double DQN and dueling DQN.
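The abstract names two loss terms, a Bellman (temporal-difference) loss for the reinforced stage and a contrastive loss for the self-supervised stage. Below is a minimal NumPy sketch of what those two objectives typically look like; the function names, shapes, and the InfoNCE-style formulation of the contrastive term are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def bellman_targets(rewards, next_q, dones, gamma=0.99):
    # TD target: y = r + gamma * max_a' Q(s', a'), zeroed for terminal states.
    # rewards: (B,), next_q: (B, n_actions), dones: (B,) in {0, 1}
    return rewards + gamma * np.max(next_q, axis=1) * (1.0 - dones)

def td_loss(q_pred, actions, targets):
    # Mean squared TD error on the Q-values of the actions actually taken.
    chosen = q_pred[np.arange(len(actions)), actions]
    return float(np.mean((chosen - targets) ** 2))

def info_nce_loss(z1, z2, tau=0.5):
    # Contrastive (InfoNCE-style) loss between embeddings of two views of
    # the same scenes; row i of z1 and row i of z2 form the positive pair.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # temperature-scaled cosine similarities
    # Softmax cross-entropy over rows, with the diagonal as the positives.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))
```

In a two-stage scheme like the one described, gradients of `td_loss` would update the full DQN, while gradients of `info_nce_loss` would fine-tune only the backbone (encoder) layers, encouraging faster, more consistent encoding of the camera input.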

Keywords:
Computer science, Reinforcement learning, Artificial intelligence, Computer vision, Human–computer interaction

Metrics

Cited by: 11
FWCI (Field-Weighted Citation Impact): 14.51
References: 24
Citation Normalized Percentile: 0.98 (in top 10%)


Topics

Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)
Robotic Path Planning Algorithms (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)

