The growing adoption of serverless computing has highlighted critical challenges in resource allocation, policy fairness, and energy efficiency within multitenant cloud environments. This research proposes a reinforcement learning (RL)-based adaptive resource allocation framework to address these issues. The framework models resource allocation as a Markov Decision Process (MDP) with dynamic states that include latency, resource utilization, and energy consumption. A reward function is designed to optimize throughput, latency, and energy efficiency while ensuring fairness among tenants. The proposed model demonstrates significant improvements over heuristic approaches, achieving a 52% reduction in latency (from 250 ms to 120 ms), a 38.9% increase in throughput (from 180 tasks/s to 250 tasks/s), and a 35% improvement in energy efficiency. Additionally, the model reduces operational costs by 40%, achieves SLA compliance rates above 98%, and enhances fairness by lowering the Gini coefficient from 0.25 to 0.10. Under burst loads, the system maintains a service level objective success rate of 94% with a time to scale of 6 s. These results underscore the potential of RL-based solutions for dynamic workload management, paving the way for more scalable, cost-effective, and sustainable serverless multitenant systems.
P. Vijay, G. Vamshi, Harisankar Haridas, V. Reddy
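
The abstract describes a scalar reward that trades off throughput, latency, and energy while penalizing unfairness measured by the Gini coefficient. A minimal sketch of such a reward, assuming a simple weighted-sum form and a standard Gini computation over per-tenant allocations (the weights and function names here are illustrative, not taken from the paper):

```python
import numpy as np

def gini(allocations):
    """Gini coefficient of per-tenant resource allocations
    (0 = perfectly fair, 1 = maximally unfair)."""
    x = np.sort(np.asarray(allocations, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)
    # Standard formula: sum((2i - n - 1) * x_i) / (n * sum(x))
    return float((2 * index - n - 1) @ x / (n * x.sum()))

def reward(throughput, latency_ms, energy_j, allocations,
           w_tp=1.0, w_lat=1.0, w_en=1.0, w_fair=1.0):
    """Scalar RL reward: rewards throughput; penalizes latency,
    energy consumption, and cross-tenant unfairness (Gini)."""
    return (w_tp * throughput
            - w_lat * latency_ms
            - w_en * energy_j
            - w_fair * gini(allocations))
```

For example, equal per-tenant allocations yield a Gini of 0 and incur no fairness penalty, so the agent is driven toward the low Gini values (e.g., 0.10) the abstract reports.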