JOURNAL ARTICLE

Computation Offloading in E-RAN via Deep Reinforcement Learning

Abstract

The fifth-generation (5G) mobile communication standards pose significant performance challenges to existing radio access networks. The Elastic Radio Access Network (E-RAN) is widely regarded as an effective architecture for future 5G access networks. Edge computing pushes computation and storage down to edge nodes closer to users, meeting the massive computing demands of large numbers of terminal devices, and the integration of edge computing with the radio access network is an important trend in the Internet and communications fields. Task offloading and resource allocation under the E-RAN architecture have therefore become key problems to be addressed. We formulate computation offloading as a long-term optimization problem whose goal is to minimize the total system cost of latency and energy consumption, and we use an improved deep Q-network (DQN) to determine offloading decisions. Experimental results show that the proposed method achieves better efficiency than the baseline methods.
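The abstract gives only the problem shape: a per-task cost that weighs latency against energy, and a DQN that chooses whether to execute locally or offload to an edge server. The paper's actual architecture, state space, and cost weights are not stated here, so the following is a minimal illustrative sketch under assumed parameters: a weighted cost `W_TIME * delay + W_ENERGY * energy`, a toy device/edge model (CPU speeds, uplink rate, energy coefficients are all invented for illustration), and a tiny two-layer Q-network trained with one-step targets. It is not the authors' method, only a demonstration of the decision structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed cost weights: equal trade-off between latency and energy.
W_TIME, W_ENERGY = 0.5, 0.5

def task_cost(size_mbits, cpu_gcycles, action):
    """Cost of running a task locally (action=0) or offloading it (action=1).
    All device/edge parameters below are illustrative assumptions."""
    if action == 0:                                   # local execution
        delay = cpu_gcycles / 1.0                     # 1 Gcycle/s local CPU
        energy = 0.9 * cpu_gcycles                    # J per Gcycle locally
    else:                                             # offload to edge server
        delay = size_mbits / 20.0 + cpu_gcycles / 10.0  # uplink + edge compute
        energy = 0.1 * size_mbits                     # transmit energy only
    return W_TIME * delay + W_ENERGY * energy

# Tiny Q-network: state (task size, CPU cycles) -> Q-values for {local, offload}.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)
SCALE = np.array([10.0, 5.0])                         # normalize state features

def q_values(state):
    h = np.maximum(0.0, (state / SCALE) @ W1 + b1)    # ReLU hidden layer
    return h @ W2 + b2, h

def train_step(state, action, target, lr=5e-3):
    """One SGD step on the squared TD error of the taken action."""
    global W1, b1, W2, b2
    q, h = q_values(state)
    err = q[action] - target
    gq = np.zeros(2); gq[action] = err                # gradient w.r.t. Q output
    gW2 = np.outer(h, gq); gb2 = gq
    gh = gq @ W2.T; gh[h <= 0.0] = 0.0                # backprop through ReLU
    gW1 = np.outer(state / SCALE, gh); gb1 = gh
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# One-step episodes with epsilon-greedy exploration: reward = -cost,
# so the Q-target for the chosen action is simply -task_cost(...).
eps = 1.0
for _ in range(5000):
    s = np.array([rng.uniform(1, 10), rng.uniform(0.1, 5)])  # (Mbits, Gcycles)
    q, _ = q_values(s)
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q))
    train_step(s, a, -task_cost(s[0], s[1], a))
    eps = max(0.05, eps * 0.999)                      # decay exploration

def decide(size_mbits, cpu_gcycles):
    """Greedy offload decision from the learned Q-values."""
    q, _ = q_values(np.array([size_mbits, cpu_gcycles]))
    return int(np.argmax(q))
```

Under these assumed parameters, a compute-heavy task with little data (e.g. 1 Mbit, 5 Gcycles) has a much lower offloading cost, while a data-heavy but light task (10 Mbits, 0.1 Gcycles) is cheaper locally, so a trained agent should learn to separate the two regimes. The paper's "improved DQN" would additionally use standard stabilizers such as experience replay and a target network, which this sketch omits for brevity.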

Keywords:
Computer science, Mobile edge computing, Edge computing, Radio access network, C-RAN, Computer network, Distributed computing, Reinforcement learning, Cellular network, Wireless, The Internet, Enhanced Data Rates for GSM Evolution, Base station, Server, Telecommunications, Mobile station, Artificial intelligence

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 12
Citation Normalized Percentile: 0.11

Topics

IoT and Edge/Fog Computing (Physical Sciences → Computer Science → Computer Networks and Communications)
Software-Defined Networks and 5G (Physical Sciences → Computer Science → Computer Networks and Communications)
Advanced MIMO Systems Optimization (Physical Sciences → Engineering → Electrical and Electronic Engineering)