Congestion is one of the most common problems in cellular networks, driven by the large increase in network load that results from demands for higher communication quality and a growing number of users. Since mobile users are not uniformly distributed across the network, load balancing has recently gained importance as a self-optimization technique for cellular networks: congestion can be mitigated by distributing the network load evenly among the available network resources. A large body of research has been dedicated to developing load balancing models for cellular networks, most of which rely on adjusting the Cell Individual Offset (CIO) parameters designed for self-optimization in cellular networks. In this paper, a new deep reinforcement learning-based load balancing approach is proposed to address the LTE downlink congestion problem. Rather than relying solely on adapting the CIO parameters, the approach has two degrees of control: the first adjusts the CIO parameters, and the second adjusts the eNodeBs' transmission power. The proposed model uses a Double Deep Q-Network (DDQN) to learn how to adjust these parameters so that a better load distribution is achieved across the overall network. Simulation results demonstrate the effectiveness of the proposed approach, improving overall network throughput by up to 21.4% and 6.5% compared to the baseline scheme and the scheme that adapts only the CIOs, respectively.
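The core of the approach is the Double DQN update, which decouples action selection from action evaluation to reduce the overestimation bias of standard Q-learning. The sketch below is a minimal illustration of that target computation, not the paper's implementation: the linear Q-function, the state and action dimensions, and the interpretation of each action as a joint CIO-step/power-step choice for an eNodeB are all assumptions made here for illustration.

```python
import numpy as np

# Minimal Double DQN target computation with a toy linear Q-function
# standing in for the deep network. Hypothetical setup: each discrete
# action is a joint choice of a CIO adjustment step and a transmit-power
# adjustment step for one eNodeB (sizes below are illustrative only).

rng = np.random.default_rng(0)

N_STATE, N_ACTION = 4, 6        # toy dimensions, not from the paper
GAMMA = 0.95                    # discount factor (assumed)

W_online = rng.normal(size=(N_STATE, N_ACTION))  # online network weights
W_target = rng.normal(size=(N_STATE, N_ACTION))  # target network weights

def q_values(W, state):
    """Q(s, .) for a linear approximator (stand-in for the deep network)."""
    return state @ W

def ddqn_target(s_next, reward, done):
    """Double DQN target: the ONLINE net selects the argmax action,
    but the TARGET net evaluates it -- this is what distinguishes
    DDQN from vanilla DQN."""
    a_star = int(np.argmax(q_values(W_online, s_next)))
    if done:
        return reward
    return reward + GAMMA * q_values(W_target, s_next)[a_star]

s_next = rng.normal(size=N_STATE)
y = ddqn_target(s_next, reward=1.0, done=False)
```

In training, `y` would serve as the regression target for the online network's Q-value at the taken (state, action) pair, with `W_target` periodically synchronized from `W_online`.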