JOURNAL ARTICLE

Edge computing dynamic unloading based on deep reinforcement learning

Abstract

This article considers a scenario with multiple edge servers and multiple users, and proposes a joint user-association and power-allocation optimization scheme to minimize power consumption and queuing delay. First, a network and computation offloading model is established that accounts for random task arrivals and time-varying wireless channels. The optimization objective is then formulated as minimizing the long-term average service cost. For this objective, a dynamic computation offloading and resource allocation algorithm based on mixed-decision deep reinforcement learning is proposed: the Actor network of DDPG handles the continuous decisions, while the Critic network of DDPG is combined with D3QN to handle the discrete decisions, solving the mixed discrete-continuous decision problem in edge-node scenarios. The proposed algorithm converges faster and more stably than baseline algorithms such as DQN, and it significantly reduces the average system service cost under different task arrival rates.
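The mixed discrete-continuous decision described above can be sketched as a policy with two heads: a DDPG-style deterministic actor producing a continuous power level, and a D3QN-style dueling Q-head selecting the discrete offloading target. The sketch below is illustrative only; the state layout, dimensions, and weight initialization are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration: state = (queue lengths, channel gains)
STATE_DIM, N_SERVERS = 6, 3   # discrete choice: which edge node to offload to

# --- Continuous head (DDPG-style actor): deterministic power allocation ---
W_a = rng.normal(scale=0.1, size=(STATE_DIM, 1))

def actor_power(s):
    """Map state to a transmit-power level in (0, 1) via a tanh squash."""
    return 0.5 * (np.tanh(s @ W_a) + 1.0)

# --- Discrete head (D3QN-style dueling Q): user association / offloading target ---
W_v = rng.normal(scale=0.1, size=(STATE_DIM, 1))            # state-value stream
W_adv = rng.normal(scale=0.1, size=(STATE_DIM, N_SERVERS))  # advantage stream

def dueling_q(s):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    v, adv = s @ W_v, s @ W_adv
    return v + adv - adv.mean(axis=-1, keepdims=True)

def hybrid_action(s):
    """One mixed decision: a discrete server index plus a continuous power level."""
    q = dueling_q(s)
    return int(np.argmax(q, axis=-1)[0]), float(actor_power(s)[0, 0])

s = rng.normal(size=(1, STATE_DIM))
server, power = hybrid_action(s)
```

In a full implementation both heads would be trained jointly (actor by deterministic policy gradient, Q-head by double-Q targets with a replay buffer); the sketch only shows how a single state yields one joint (discrete, continuous) action.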

Keywords:
Mobile edge computing; Edge computing; Deep reinforcement learning; Computation offloading; Resource allocation; Queueing delay

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 13
Citation Normalized Percentile: 0.27

Topics

IoT and Edge/Fog Computing
Physical Sciences →  Computer Science →  Computer Networks and Communications
Age of Information Optimization
Physical Sciences →  Computer Science →  Computer Networks and Communications
Context-Aware Activity Recognition Systems
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
