Zelin Ji, Zhijin Qin, Xiaoming Tao
In cellular networks, resource allocation is usually performed in a centralized way, which imposes a heavy computational burden on the base station (BS) and incurs high transmission overhead. This paper introduces a distributed resource allocation method that aims to maximize energy efficiency (EE) while ensuring quality of service (QoS) for users. Specifically, to address the challenge of fast-varying wireless channel conditions, we propose a robust meta federated reinforcement learning (MFRL) framework that enables local users to optimize transmit power and assign channels using locally trained neural network models. This approach offloads the computational burden from the cloud server to the local users and reduces the transmission overhead associated with local channel state information. The BS performs the meta-learning procedure to initialize a general global model, enabling rapid adaptation to different environments and improved EE performance. The federated learning technique, built on decentralized reinforcement learning, promotes collaboration and mutual benefit among users. Analysis and numerical results demonstrate that the proposed MFRL framework accelerates the reinforcement learning process, decreases transmission overhead, and offloads computation, while outperforming the conventional decentralized reinforcement learning algorithm in terms of convergence speed and EE performance across various scenarios.
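The abstract describes local users training models on their own channel observations and the BS aggregating them into a shared global initialization. A minimal sketch of this FedAvg-style aggregation loop, assuming simple parameter averaging and gradient-based local updates (all function names and the toy policy model here are hypothetical, not taken from the paper):

```python
import numpy as np

def local_update(global_params, local_grad, lr=0.01):
    # Hypothetical local step: each user refines the global policy
    # parameters using its own locally computed gradient.
    return global_params - lr * local_grad

def federated_average(local_params_list):
    # BS-side aggregation: average user models into a new global model
    # (FedAvg-style), which also serves as the shared initialization
    # that new users can adapt from quickly.
    return np.mean(local_params_list, axis=0)

# Toy illustration: 3 users sharing a 4-parameter policy model.
rng = np.random.default_rng(0)
global_params = np.zeros(4)
for _ in range(5):  # communication rounds
    user_models = [local_update(global_params, rng.normal(size=4))
                   for _ in range(3)]
    global_params = federated_average(user_models)
```

Only model parameters travel between users and the BS in this scheme; raw channel state information stays local, which is the source of the transmission-overhead savings the abstract claims.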
Giang Nguyen, Derek Kwaku Pobi Asiedu, Ji-Hoon Yun
Kaidi Xu, Shenglong Zhou, Geoffrey Ye Li
Zhouxiang Wu, Genya Ishigaki, Riti Gour, Congzhou Li, Jason P. Jue