JOURNAL ARTICLE

Opportunistic Task Offloading in UAV-assisted Mobile Edge Computing: A Deep Reinforcement Learning Approach

Abstract

Mobile edge computing (MEC) extends cloud services to the network edge to reduce network traffic and latency in 5G mobile networks. Unmanned aerial vehicles (UAVs) are increasingly deployed as assisted edge clouds for large-scale, sparsely distributed user equipment, owing to their flexible deployment, wide coverage, and reliable wireless communication. In this paper, we propose a deep Q-learning-based opportunistic task offloading algorithm for UAV-assisted mobile edge computing. To this end, we formulate a Markov decision process (MDP) model in which the UAV chooses, for each task, whether to offload it to the cloud server or process it on the local MEC server. Extensive simulations show that our task offloading algorithm outperforms both offload-only and local-only baselines while ensuring satisfactory service quality for 5G services.
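The offload-or-process-locally decision described in the abstract can be illustrated with a minimal tabular Q-learning sketch on a toy offloading MDP. The state space (local queue length), action set, and reward values below are illustrative assumptions for the sketch, not the paper's actual system model or the deep Q-network it proposes.

```python
import random

# Toy MDP: state = task queue length at the UAV's local MEC server (0..4).
# Actions: 0 = process locally, 1 = offload to the cloud server.
# Assumed rewards: local processing cost grows with queue length;
# offloading incurs a fixed transmission cost but drains the queue.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action, rng):
    if action == 1:  # offload: fixed link cost, queue shrinks
        reward, next_state = -1.0, max(state - 1, 0)
    else:            # local: cost proportional to congestion, queue may grow
        reward = -0.5 * state
        next_state = min(state + (1 if rng.random() < 0.5 else 0), N_STATES - 1)
    return next_state, reward

def train(episodes=2000, horizon=20, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: q[s][x])
            s2, r = step(s, a, rng)
            # standard Q-learning update
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
```

Under these assumed costs, the learned policy processes tasks locally when the queue is empty and offloads when the queue is congested, which is the opportunistic behavior the abstract describes; the paper's method replaces the Q-table with a deep neural network over a richer state space.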

Keywords:
Mobile edge computing; Reinforcement learning; Cloud computing; Markov decision process; Computer network; Distributed computing; Cellular network; Wireless network; Quality of service; Edge device; Artificial intelligence

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 1.04
References: 13
Citation Normalized Percentile: 0.84

Topics

UAV Applications and Optimization
Physical Sciences → Engineering → Aerospace Engineering
IoT and Edge/Fog Computing
Physical Sciences → Computer Science → Computer Networks and Communications
Privacy-Preserving Technologies in Data
Physical Sciences → Computer Science → Artificial Intelligence