This is because, in distributed systems, resources are allocated and deallocated continuously. Traditional algorithms are suboptimal because they require constant fine-tuning and manual adjustments to performance based on current conditions. This paper aims to assess the ability of recently developed models to increase productivity. Through machine-learning techniques, these models can be trained to make such allocation decisions in real time, leading to greater efficiency, scalability, and flexibility.
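The contrast above can be illustrated with a minimal sketch: a traditional allocator reacts to a fixed, hand-tuned threshold, while a learned allocator scales based on a forecast of upcoming load. Both functions, the thresholds, and the moving-average forecast (standing in for a trained model) are hypothetical illustrations, not the models evaluated in this paper.

```python
def static_allocate(load, threshold=0.8, capacity=10):
    """Traditional rule: add a node only after load crosses a
    hand-tuned threshold (reactive, needs manual re-tuning)."""
    return capacity + 1 if load > threshold else capacity


def predictive_allocate(history, capacity=10):
    """Learned-style rule: scale ahead of demand using a forecast.
    A 3-step moving average stands in for a trained model here."""
    window = history[-3:]
    forecast = sum(window) / len(window)
    if forecast > 0.8:      # demand expected to rise: scale out
        return capacity + 1
    if forecast < 0.3:      # demand expected to fall: scale in
        return capacity - 1
    return capacity         # otherwise keep current capacity


# The static rule only reacts once load is already high,
# while the predictive rule also releases idle capacity.
print(static_allocate(0.9))                     # scales out
print(predictive_allocate([0.85, 0.9, 0.95]))   # scales out ahead of peak
print(predictive_allocate([0.2, 0.1, 0.15]))    # scales in to save resources
```

The design point is that the predictive variant removes the hand-tuned reactive loop: replacing the moving average with a model trained on workload traces lets the same decision structure adapt to changing conditions without manual intervention.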