JOURNAL ARTICLE

Deep Reinforcement Learning for Cooperative Edge Caching in Future Mobile Networks

Abstract

To satisfy rapidly increasing multimedia service requests from mobile users, content caching at the network edge (e.g., at base stations) is regarded as a promising technique for future mobile networks. In this paper, leveraging the strength of Deep Reinforcement Learning (DRL) in solving complicated control problems, we propose a Double Deep Q-Network (Double DQN) framework for cooperative edge caching in mobile networks. In particular, we aim to minimize the long-term average content fetching delay of mobile users without requiring any a priori knowledge of the content popularity distribution. Trace-driven simulation results show that our proposed framework outperforms existing caching algorithms, improving on the Least Recently Used (LRU), Least Frequently Used (LFU), and First-In First-Out (FIFO) strategies by 7%, 11%, and 9%, respectively. Moreover, our framework is shown to incur only a 4% average performance loss compared to an omniscient oracle algorithm.
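The abstract names Double DQN as the learning framework. As background, a minimal sketch of the Double DQN target computation, which decouples action selection (online network) from action evaluation (target network); the function name, toy action values, and the three-action cache-replacement framing are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def double_dqn_target(q_online_next, q_target_next, reward, gamma):
    """Double DQN bootstrap target: the online network selects the
    greedy next action, the target network evaluates it. This reduces
    the overestimation bias of vanilla DQN. (Illustrative helper.)"""
    a_star = int(np.argmax(q_online_next))        # action picked by online net
    return reward + gamma * q_target_next[a_star]  # value from target net

# Toy next-state action values for three hypothetical cache-replacement actions
q_online_next = np.array([1.0, 3.0, 2.0])  # online net is greedy w.r.t. action 1
q_target_next = np.array([0.5, 2.0, 4.0])  # target net scores that same action
y = double_dqn_target(q_online_next, q_target_next, reward=1.0, gamma=0.9)
# y = 1.0 + 0.9 * 2.0 = 2.8
```

In a vanilla DQN, both the max and the evaluation would use `q_target_next` (giving 1.0 + 0.9 * 4.0 = 4.6 here), which illustrates the overestimation that Double DQN is designed to avoid.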

Keywords:
Computer science; Reinforcement learning; Computer network; Oracle; Base station; Enhanced Data Rates for GSM Evolution; Cellular network; FIFO (computing and electronics); Cache; Distributed computing; Artificial intelligence

Metrics

Cited by: 32
References: 23
FWCI (Field-Weighted Citation Impact): 4.08
Citation Normalized Percentile: 0.94 (in top 10%)

Topics

Caching and Content Delivery
Physical Sciences →  Computer Science →  Computer Networks and Communications
Opportunistic and Delay-Tolerant Networks
Physical Sciences →  Computer Science →  Computer Networks and Communications
Cooperative Communication and Network Coding
Physical Sciences →  Computer Science →  Computer Networks and Communications