JOURNAL ARTICLE

Task Offloading for UAV-based Mobile Edge Computing via Deep Reinforcement Learning

Abstract

With the rapid increase in users' data-processing demands in mobile edge computing (MEC), conventional mobile edge servers (MESs) are no longer capable of providing timely and effective services. Against this background, we focus on deploying an unmanned aerial vehicle (UAV) as an MES to provide computational task-offloading services for users. In this paper, we aim to maximize the migration throughput of user tasks given the UAV's limited energy. Specifically, we first formulate the maximization problem as a semi-Markov decision process (SMDP) without known transition probabilities. We then propose a deep reinforcement learning (DRL)-based scheme that maximizes the migration throughput of user tasks. The scheme autonomously achieves maximum migration throughput under the UAV's limited energy and thereby improves the quality of service (QoS) of MEC. Simulation results demonstrate that the proposed scheme is effective and exhibits favourable convergence.

Keywords:
Mobile edge computing; Unmanned aerial vehicle; Task offloading; Deep reinforcement learning; Semi-Markov decision process; Throughput; Quality of service; Energy consumption

Metrics

Cited by: 67
FWCI (Field-Weighted Citation Impact): 7.87
References: 13
Citation Normalized Percentile: 0.98

Topics

UAV Applications and Optimization
Physical Sciences →  Engineering →  Aerospace Engineering
IoT and Edge/Fog Computing
Physical Sciences →  Computer Science →  Computer Networks and Communications
Opportunistic and Delay-Tolerant Networks
Physical Sciences →  Computer Science →  Computer Networks and Communications