This paper presents an AI-driven approach to real-time data processing that adaptively allocates resources in distributed systems. Our framework uses machine learning and reinforcement learning to adjust CPU, memory, and network allocations dynamically in order to meet performance targets. In evaluations across diverse workloads, the system reduced latency, increased throughput, and improved resource utilization. To handle high-volume, high-velocity data streams, we combine predictive analytics with adaptive scheduling. The findings indicate that AI-driven resource management is a flexible and reliable foundation for modern data engineering workloads in both cloud and edge environments.
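The core idea of learning a resource allocation that meets a performance goal can be illustrated with a minimal sketch. The example below is a hypothetical epsilon-greedy bandit that chooses among a few CPU-share levels to minimize latency; the `CPU_LEVELS` values, the `simulated_latency` workload model, and all function names are illustrative assumptions, not the paper's actual method.

```python
import random

CPU_LEVELS = [0.25, 0.5, 1.0, 2.0]  # candidate CPU-core allocations (assumed)

def simulated_latency(cpu, rng):
    """Toy workload model: more CPU lowers latency, plus noise."""
    return 100.0 / cpu + rng.uniform(-5.0, 5.0)

def allocate(steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy selection of the CPU level with the best average reward.

    Reward is negative latency, so maximizing reward minimizes latency.
    """
    rng = random.Random(seed)
    counts = [0] * len(CPU_LEVELS)
    avg_reward = [0.0] * len(CPU_LEVELS)
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(CPU_LEVELS))      # explore a random level
        else:
            arm = max(range(len(CPU_LEVELS)),
                      key=lambda i: avg_reward[i])    # exploit the best so far
        reward = -simulated_latency(CPU_LEVELS[arm], rng)
        counts[arm] += 1
        # Incremental running average of the observed reward for this arm
        avg_reward[arm] += (reward - avg_reward[arm]) / counts[arm]
    return CPU_LEVELS[max(range(len(CPU_LEVELS)), key=lambda i: avg_reward[i])]
```

A real system would replace the simulated latency with measured service metrics and a richer state (memory pressure, network load), but the feedback loop — act, observe a performance signal, update the policy — is the same shape.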