Abstract: This study presents a reinforcement learning framework for optimizing resource allocation in cloud-edge systems, focusing on latency reduction and energy efficiency. The proposed model uses Q-learning to dynamically allocate computing resources, adapting to real-time workload fluctuations. Performance is benchmarked against heuristic-based methods on synthetic workload simulations over 100 training episodes. Results indicate that the Q-learning approach achieves significantly lower latency and energy consumption than the heuristic baselines. These findings support deploying learning-driven strategies in edge computing environments to improve operational efficiency while keeping computational overhead low.

Keywords: Reinforcement Learning, Cloud-Edge Systems, Resource Allocation, Q-Learning, Latency Optimization, Energy Efficiency
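To make the abstract's approach concrete, the following is a minimal, illustrative sketch of tabular Q-learning applied to a toy resource-allocation problem. The state space (coarse workload levels), action space (number of edge servers allocated), reward shape (latency and energy penalties), and the cyclic workload dynamics are all assumptions for demonstration purposes; the paper's actual simulation model, state encoding, and reward function are not specified in the abstract.

```python
import random

def train_q_learning(num_states=4, num_actions=3, episodes=100,
                     steps=60, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a toy cloud-edge allocation problem.

    States are coarse workload levels (0 = idle .. 3 = peak); actions
    are how many edge servers to allocate. The reward penalizes
    latency (under-allocation) and energy (each allocated server).
    The workload cycles deterministically; all dynamics here are
    illustrative placeholders, not the paper's simulation model.
    """
    rng = random.Random(seed)
    q = [[0.0] * num_actions for _ in range(num_states)]

    def reward(workload, allocated):
        latency_penalty = 2.0 * max(0, workload - allocated)
        energy_penalty = 1.0 * allocated
        return -(latency_penalty + energy_penalty)

    for _ in range(episodes):
        state = rng.randrange(num_states)
        for _ in range(steps):
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.randrange(num_actions)
            else:
                action = max(range(num_actions), key=lambda a: q[state][a])
            r = reward(state, action)
            next_state = (state + 1) % num_states  # cyclic workload pattern
            # Q-learning update: bootstrap from the best next-state value.
            q[state][action] += alpha * (r + gamma * max(q[next_state])
                                         - q[state][action])
            state = next_state
    return q

q_table = train_q_learning()
greedy = [max(range(3), key=lambda a: q_table[s][a]) for s in range(4)]
print(greedy)  # learned allocation per workload level
```

Because actions here do not influence the workload transition, the learned greedy policy simply matches allocation to workload level, which is the intended sanity check: the agent converges to allocating more servers as workload rises, trading the energy cost of each server against the latency penalty of under-provisioning.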
Dimitrios Konidaris, Polyzois Soumplis, Andreas Varvarigos, Panagiotis Kokkinos