JOURNAL ARTICLE

Deep Reinforcement Learning based Resource Allocation in Low Latency Edge Computing Networks

Abstract

In this paper, we investigate computational resource allocation via deep reinforcement learning in mobile edge computing networks that employ finite-blocklength codes to support low-latency communications. We address the end-to-end (E2E) reliability of the service, taking into account both the delay violation probability and the decoding error probability. Employing a deep reinforcement learning method, namely deep Q-learning, we design an intelligent agent at the edge computing node that learns a real-time adaptive policy for allocating computational resources to the offloaded tasks of multiple users, with the goal of improving the average E2E reliability. Simulations show that, under different task arrival rates, the learned policy increases the number of tasks served within the delay target, thereby reducing the delay violation rate while keeping the decoding error probability at an acceptable level. Moreover, the proposed deep reinforcement learning approach outperforms both the random and equal-scheduling benchmarks.
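The agent loop described in the abstract can be illustrated with a small sketch. As a stand-in for the paper's deep Q-network, the sketch below uses tabular Q-learning on a toy problem; the environment (arrival-rate states, core-count actions, a deadline-based reward) is an assumption for illustration only and is not taken from the paper. In the actual method, the Q-table would be replaced by a neural network over a richer state space.

```python
import random

# Hypothetical toy setup (not from the paper): the agent observes a task
# arrival-rate level and picks how many CPU cores to grant the offloaded
# task; the reward is 1 when the task meets its deadline, else 0.
ARRIVAL_STATES = 3      # low / medium / high arrival rate
ACTIONS = 4             # allocate 1..4 cores (encoded as actions 0..3)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Toy dynamics: higher arrival rates need more cores to meet the deadline."""
    reward = 1.0 if action >= state else 0.0
    next_state = random.randrange(ARRIVAL_STATES)  # arrivals change randomly
    return next_state, reward

Q = [[0.0] * ACTIONS for _ in range(ARRIVAL_STATES)]

def train(episodes=5000, seed=0):
    random.seed(seed)
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            action = random.randrange(ACTIONS)
        else:
            action = max(range(ACTIONS), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Standard Q-learning temporal-difference update.
        best_next = max(Q[next_state])
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = next_state

train()
# Greedy policy: cores allocated for each arrival-rate level.
policy = [max(range(ACTIONS), key=lambda a: Q[s][a]) for s in range(ARRIVAL_STATES)]
```

After training, the greedy policy allocates at least as many cores as the arrival level demands, i.e. `policy[s] >= s` for every state, which mirrors how the paper's agent adapts allocation to the task arrival rate.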

Keywords:
Reinforcement learning, Deep learning, Q-learning, Edge computing, Resource allocation, Latency, Reliability, Decoding, Scheduling, Distributed computing, Computer networks, Mathematical optimization

Metrics

Cited by: 147
FWCI (Field-Weighted Citation Impact): 16.77
References: 16
Citation Normalized Percentile: 0.99 (in the top 1% of its field)

Topics

IoT and Edge/Fog Computing (Physical Sciences → Computer Science → Computer Networks and Communications)
Age of Information Optimization (Physical Sciences → Computer Science → Computer Networks and Communications)
IoT Networks and Protocols (Physical Sciences → Engineering → Electrical and Electronic Engineering)
