Today, many important internet services are provided on cloud computing platforms. The ever-increasing expansion of services and user requirements has made the optimal use of resources essential. Consequently, several algorithms for optimal resource usage have been proposed to increase cloud performance and provide more satisfactory services to internet users. Typically, each algorithm suits a specific environment, and its performance degrades when the execution environment changes. Moreover, the assignment of tasks to resources in cloud computing is a difficult problem that is NP-Complete. This paper proposes a novel deep reinforcement learning-based approach for task scheduling in cloud computing environments, aiming to minimize the makespan, i.e., the total time required to complete all tasks. The proposed approach uses a deep Q-learning algorithm to learn near-optimal task allocation strategies based on the current state of the system. Its task scheduling performance is compared with the Min-Min, Max-Min, FCFS, and GA algorithms on three criteria: makespan, algorithm execution time, and computational complexity. Simulation results demonstrate that the proposed approach achieves an excellent makespan while requiring very little algorithm execution time.
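To make the core idea concrete, the sketch below illustrates Q-learning-based task scheduling with a tabular Q-table rather than the paper's deep Q-network; the state encoding (index of the next task), the reward (negative increase in makespan after each placement), and all hyperparameter values are illustrative assumptions, not the paper's actual design.

```python
import random

def greedy_baseline(tasks, n_machines):
    # Classic heuristic: assign each task to the currently least-loaded machine.
    loads = [0.0] * n_machines
    for t in tasks:
        loads[loads.index(min(loads))] += t
    return max(loads)  # makespan = load of the busiest machine

def q_learning_schedule(tasks, n_machines, episodes=2000, alpha=0.1,
                        gamma=0.9, eps=0.2, seed=0):
    # Tabular stand-in for the deep Q-network:
    #   state  = index of the next task to place,
    #   action = machine chosen for that task,
    #   reward = negative increase in makespan caused by the placement.
    rng = random.Random(seed)
    Q = [[0.0] * n_machines for _ in tasks]
    for _ in range(episodes):
        loads = [0.0] * n_machines
        for s, t in enumerate(tasks):
            # epsilon-greedy exploration over machine choices
            if rng.random() < eps:
                a = rng.randrange(n_machines)
            else:
                a = max(range(n_machines), key=lambda m: Q[s][m])
            before = max(loads)
            loads[a] += t
            reward = -(max(loads) - before)
            nxt = max(Q[s + 1]) if s + 1 < len(tasks) else 0.0
            # standard Q-learning update rule
            Q[s][a] += alpha * (reward + gamma * nxt - Q[s][a])
    # greedy rollout of the learned policy
    loads = [0.0] * n_machines
    for s, t in enumerate(tasks):
        a = max(range(n_machines), key=lambda m: Q[s][m])
        loads[a] += t
    return max(loads)

tasks = [4, 7, 2, 9, 5, 3, 6, 8]
print("greedy makespan:", greedy_baseline(tasks, 3))
print("learned makespan:", q_learning_schedule(tasks, 3))
```

In the paper's setting, the tabular Q-table would be replaced by a neural network that generalizes over system states, which is what allows the approach to scale to large task sets and changing environments.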