To satisfy the rapidly increasing multimedia service requests from mobile users, content caching at the network edge (e.g., at base stations) has been regarded as a promising technique in future mobile networks. In this paper, leveraging the strength of Deep Reinforcement Learning (DRL) in solving complicated control problems, we propose a Double Deep Q-Network (DDQN) framework for cooperative edge caching in mobile networks. In particular, we aim to minimize the long-term average content fetching delay of mobile users without requiring any a priori knowledge of the content popularity distribution. Trace-driven simulation results show that our proposed framework outperforms existing caching algorithms, including the Least Recently Used (LRU), Least Frequently Used (LFU), and First-In First-Out (FIFO) caching strategies, by 7%, 11%, and 9%, respectively. Moreover, our proposed framework incurs only a 4% average performance loss compared to an omniscient oracle algorithm.
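The abstract names Double Deep Q-Network as the underlying learner. As a minimal sketch of the core idea (not the paper's actual architecture, whose state and action encodings for caching are not given here), Double DQN decouples action *selection* (online network) from action *evaluation* (target network) when bootstrapping the Q-learning target; all function and variable names below are illustrative assumptions.

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target,
                       gamma=0.99, dones=None):
    """Compute Double DQN bootstrap targets:
        y = r + gamma * Q_target(s', argmax_a Q_online(s', a))

    rewards:       (batch,)            e.g. negative content-fetching delay
    next_q_online: (batch, n_actions)  online-network Q-values at s'
    next_q_target: (batch, n_actions)  target-network Q-values at s'
    dones:         (batch,) optional   1.0 where the episode terminated
    """
    # Select the greedy next action with the online network ...
    best_actions = np.argmax(next_q_online, axis=1)
    # ... but evaluate it with the (slower-moving) target network,
    # which reduces the overestimation bias of vanilla DQN.
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    if dones is not None:
        evaluated = evaluated * (1.0 - dones)
    return rewards + gamma * evaluated
```

In a caching setting, one would typically use the (negative) fetching delay of the requested content as the per-step reward, so that maximizing return minimizes long-term average delay.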
Meng Deng, Huan Zhou, Jiang Kai, Zheng Hantong, Yue Cao, Peng Chen