JOURNAL ARTICLE

Deep Reinforcement Learning Based Task Offloading for UAV-Assisted Edge Computing

Abstract

Task offloading via Unmanned Aerial Vehicle (UAV) relays has emerged as a vital mechanism in edge computing for overcoming the limited computing capability of terminal devices. Contemporary offloading strategies predominantly focus either on minimizing the energy consumption of individual UAVs or on adjusting flight trajectories to cover a larger pool of mobile users; they often overlook how UAVs could collaborate to optimize system resources. Given the energy constraints inherent to UAVs, an appropriate cooperative offloading mechanism can alleviate the problems of excessive load or energy shortage at a single UAV. As a result, the volume of data offloaded from the UAV swarm to edge servers can be maximized while resource waste during inter-UAV task transfers is reduced. This paper introduces a Deep Q-Network-based Resource Allocation Policy (DRAP) that manages task offloading as task data arrives at the UAVs. The policy employs a Deep Reinforcement Learning (DRL) network to allocate task data, delegating suitable UAVs to perform the offloading actions. The offloading strategy centrally assesses the current state of all UAVs in the swarm and all pending task data, and selects the most suitable UAV for each offloading operation, significantly relieving the load when a single UAV is overburdened or energy-deprived. Because local task data is not processed but merely transmitted, task offloading can be treated as a discrete action. This observation transforms the UAV task-offloading problem into a selection problem over a discrete action space, aimed at maximizing the utilization of UAV resources.
Extensive simulations demonstrate that the proposed solution outperforms benchmark algorithms, achieving a higher task-offload volume with lower communication-resource consumption.
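The core decision described in the abstract, centrally picking one UAV from the swarm for each arriving task, given the UAVs' residual energy, can be illustrated with a minimal reinforcement-learning sketch. The code below uses tabular Q-learning as a simplified stand-in for the paper's DQN; the state discretization, reward shaping (offloaded volume as reward, a penalty for choosing a depleted UAV), and all function names are illustrative assumptions, not details taken from the article.

```python
import random

N_UAVS = 3
ACTIONS = list(range(N_UAVS))  # action = index of the UAV chosen to offload the task


def discretize(energies):
    # Coarse state: each UAV's residual energy bucketed into low/mid/high.
    return tuple(min(int(e // 34), 2) for e in energies)


def step(energies, action, task_size):
    # Assumed reward: offloaded data volume if the chosen UAV can afford the
    # transmission cost, otherwise a penalty for picking a depleted UAV.
    cost = task_size * 0.5
    if energies[action] >= cost:
        energies[action] -= cost
        return task_size
    return -task_size


def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}  # maps discretized swarm state -> Q-value per UAV choice
    for _ in range(episodes):
        energies = [100.0] * N_UAVS  # fresh swarm each episode
        for _t in range(20):  # 20 task arrivals per episode
            s = discretize(energies)
            qs = Q.setdefault(s, [0.0] * N_UAVS)
            # Epsilon-greedy selection over the discrete action space.
            if rng.random() < eps:
                a = rng.randrange(N_UAVS)
            else:
                a = max(ACTIONS, key=lambda i: qs[i])
            task = rng.uniform(5, 15)
            r = step(energies, a, task)
            q_next = Q.setdefault(discretize(energies), [0.0] * N_UAVS)
            # Standard Q-learning temporal-difference update.
            qs[a] += alpha * (r + gamma * max(q_next) - qs[a])
    return Q


def choose_uav(Q, energies):
    # Greedy deployment policy: pick the UAV with the highest learned value.
    qs = Q.get(discretize(energies), [0.0] * N_UAVS)
    return max(ACTIONS, key=lambda i: qs[i])
```

In the paper's setting the tabular lookup would be replaced by a DQN that generalizes over continuous swarm states, but the structure of the decision, one discrete choice of UAV per pending task, is the same.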

Keywords:
Reinforcement learning; Task offloading; Mobile edge computing; Energy consumption; Distributed computing; Edge computing; Real-time computing; Artificial intelligence; Computer networks

Metrics

Cited By: 3
FWCI (Field-Weighted Citation Impact): 1.56
References: 10
Citation Normalized Percentile: 0.86

Topics

UAV Applications and Optimization
Physical Sciences →  Engineering →  Aerospace Engineering
IoT and Edge/Fog Computing
Physical Sciences →  Computer Science →  Computer Networks and Communications
Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition