Yujie Zhao, Tao Peng, Yichen Guo, Wenbo Wang
Energy efficiency (EE) is acknowledged as a key performance indicator for 5G networks. This paper studies the problem of energy-efficient power allocation in 5G ultra-dense networks (UDNs). Existing power allocation algorithms mainly focus on the downlink, and most of them are analytic, which incurs high computational complexity and makes them difficult to apply to large-scale deployments in UDNs. To reduce this complexity, this paper proposes an uplink power allocation algorithm based on multi-agent reinforcement learning (MARL). Each user acts as an agent, and all users interact with the communication environment simultaneously. In the MARL framework of the proposed algorithm, a performance estimator is added to help train the Q-network. Simulation results show that the proposed algorithm performs efficiently in terms of energy efficiency while maintaining high network throughput. The complexity of the proposed algorithm is shown to be at least two orders of magnitude lower than that of the analytic algorithms.
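The setting described above — each uplink user acting as an agent that learns a transmit power while sharing an energy-efficiency reward — can be illustrated with a minimal sketch. This is not the paper's actual method (which trains a Q-network with a performance estimator); it is a simplified, stateless, tabular independent Q-learning toy with an assumed channel model, and all parameter values (number of users, power levels, channel gains, learning rates) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N_USERS = 3                                # hypothetical number of uplink users (agents)
POWER_LEVELS = np.array([0.1, 0.5, 1.0])   # illustrative discrete transmit powers (W)
N_ACTIONS = len(POWER_LEVELS)
NOISE = 1e-3                               # receiver noise power (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1          # learning rate, discount, exploration

# Illustrative fixed channel gains from each user to the base station
gains = rng.uniform(0.1, 1.0, size=N_USERS)

# One Q-table per agent; a single (stateless) state keeps the sketch tabular
Q = np.zeros((N_USERS, N_ACTIONS))

def energy_efficiency(actions):
    """Sum rate divided by total power -- a simple stand-in for the EE metric."""
    powers = POWER_LEVELS[actions]
    total_rate = 0.0
    for i in range(N_USERS):
        interference = sum(gains[j] * powers[j] for j in range(N_USERS) if j != i)
        sinr = gains[i] * powers[i] / (interference + NOISE)
        total_rate += np.log2(1.0 + sinr)
    return total_rate / powers.sum()

for episode in range(2000):
    # Each agent independently picks a power level epsilon-greedily
    actions = np.array([
        rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(Q[i]))
        for i in range(N_USERS)
    ])
    ee = energy_efficiency(actions)        # shared reward: network energy efficiency
    for i in range(N_USERS):               # independent Q-learning update per agent
        a = actions[i]
        Q[i, a] += ALPHA * (ee + GAMMA * Q[i].max() - Q[i, a])

learned = [int(np.argmax(Q[i])) for i in range(N_USERS)]
print("learned power levels:", POWER_LEVELS[learned])
```

The shared reward means each agent's update is driven by the network-wide EE, loosely mirroring the cooperative MARL framing of the abstract; the paper's performance estimator and Q-network are abstracted away here.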