JOURNAL ARTICLE

Enhanced DQN in Task Offloading Across Multi-Tier Computing Networks

Abstract

Effective task-offloading decisions can improve the utilization of network and computing resources in edge computing, thereby reducing the timeout rate of time-sensitive tasks and the average task processing time. We introduce an abstracted multi-tier computing network environment that resembles real-world conditions more closely than those used in prior studies. The Deep Q-Network (DQN) is a reinforcement learning algorithm that uses deep neural networks to optimize sequential decision-making. We employ a deep-reinforcement-learning decision strategy, presenting an enhanced DQN model that incorporates advanced techniques, and validate its superior performance against baseline strategies.
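The DQN idea the abstract describes — learning Q-values for offloading actions from transitions, with a separate target network to stabilize updates — can be illustrated with a minimal sketch. This is not the authors' model: the state features, action set (local/edge/cloud), reward, and the linear Q-approximator standing in for a deep network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4   # assumed task/network features, e.g. task size, deadline, queue lengths
N_ACTIONS = 3    # assumed offloading targets: 0 = local device, 1 = edge server, 2 = cloud
GAMMA = 0.9      # discount factor
LR = 0.01        # learning rate

# Linear stand-in for the deep Q-network: Q(s, a) = W[a] @ s
W = rng.normal(scale=0.1, size=(N_ACTIONS, N_FEATURES))  # online network
W_target = W.copy()                                       # target network (the DQN stabilization trick)

def q_values(weights, state):
    return weights @ state

def td_update(state, action, reward, next_state, done):
    """One DQN-style temporal-difference step on the online weights."""
    # Bootstrapped target uses the *frozen* target network, not the online one
    target = reward if done else reward + GAMMA * q_values(W_target, next_state).max()
    td_error = target - q_values(W, state)[action]
    W[action] += LR * td_error * state   # gradient step for the linear approximator
    return td_error

# One illustrative transition: observe a task state, offload to the edge, receive a reward
s = rng.random(N_FEATURES)
s_next = rng.random(N_FEATURES)
err_before = abs(td_update(s, action=1, reward=1.0, next_state=s_next, done=False))
for _ in range(50):
    td_update(s, action=1, reward=1.0, next_state=s_next, done=False)
err_after = abs(td_update(s, action=1, reward=1.0, next_state=s_next, done=False))
```

Because the target network is held fixed here, repeated updates on the same transition drive the TD error toward zero; in a full DQN, the target weights are periodically synchronized with the online ones and transitions are sampled from a replay buffer.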

Keywords:
Edge computing; Task offloading; Deep Q-Network; Reinforcement learning; Deep neural networks; Distributed computing

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.88
References: 10
Citation Normalized Percentile: 0.65


Topics

IoT and Edge/Fog Computing
Physical Sciences →  Computer Science →  Computer Networks and Communications
Cloud Computing and Resource Management
Physical Sciences →  Computer Science →  Information Systems
Stochastic Gradient Optimization Techniques
Physical Sciences →  Computer Science →  Artificial Intelligence