JOURNAL ARTICLE

A2C-DRL: Dynamic Scheduling for Stochastic Edge–Cloud Environments Using A2C and Deep Reinforcement Learning

Jialin Lu, Jing Yang, Shaobo Li, Yijun Li, Jiang Wu, Jiangtian Dai, Jianjun Hu

Year: 2024   Journal: IEEE Internet of Things Journal   Vol: 11 (9)   Pages: 16915-16927   Publisher: Institute of Electrical and Electronics Engineers

Abstract

Resource management challenges frequently manifest in systems and networks as difficult online decision tasks, whose proper solution depends on an understanding of the workload and environment and enables smooth use of mobile edge and cloud resources. Due to the geographical dispersion of resources, constrained resource capacity, unpredictable nature of tasks, and network hierarchy present in such contexts, it is difficult to schedule jobs efficiently in edge environments. Unfortunately, existing heuristic-based methods lack generality and fast adaptability and thus cannot optimally solve such problems. On the one hand, the advantage actor–critic (A2C) method can quickly adapt to dynamic circumstances from relatively little data; on the other, deep reinforcement learning (DRL) agents can rapidly learn from their experience of environmental interactions to make better judgments. Therefore, we present A2C-DRL, a real-time task scheduling technique for stochastic edge–cloud environments that enables decentralized learning and simultaneous task scheduling across multiple servers. With the aim of producing efficient scheduling decisions, we develop reward values for various resources and model the update policy, the server resource scheduling method, and the policy learning method. The model is adaptive and includes various hyperparameters that can be adjusted in accordance with the application requirements. We evaluate the load balancing capability of the model by introducing a load balancing factor. Experiments on real datasets show that the proposed A2C-DRL method outperforms seven state-of-the-art algorithms in terms of reward value, task rejection, and the load balancing factor.
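The abstract describes an advantage actor–critic (A2C) agent that learns a task-to-server placement policy from rewards shaped around resource load. The paper's actual state encoding, reward design, and network architecture are not given here, so the following is only a minimal illustrative sketch of the A2C mechanism it names: a softmax actor picks a server for each incoming task, a linear critic estimates state value, and both are updated from the TD-error advantage. The environment (three servers, decaying loads, reward penalizing the chosen server's load) is an invented toy, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVERS = 3                # hypothetical number of edge servers
STATE_DIM = N_SERVERS + 1    # per-server load plus the incoming task size

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

class A2CScheduler:
    """Toy A2C: linear softmax actor over servers, linear critic V(s)."""
    def __init__(self, lr_actor=0.01, lr_critic=0.05, gamma=0.9):
        self.Wa = np.zeros((N_SERVERS, STATE_DIM))  # actor weights
        self.wc = np.zeros(STATE_DIM)               # critic weights
        self.lr_a, self.lr_c, self.gamma = lr_actor, lr_critic, gamma

    def policy(self, s):
        return softmax(self.Wa @ s)

    def act(self, s):
        return int(rng.choice(N_SERVERS, p=self.policy(s)))

    def update(self, s, a, r, s_next):
        # Advantage as the TD error: r + gamma * V(s') - V(s)
        adv = r + self.gamma * (self.wc @ s_next) - self.wc @ s
        self.wc += self.lr_c * adv * s              # critic step toward TD target
        probs = self.policy(s)
        # Gradient of log softmax-linear policy: (one_hot(a) - probs) outer s
        grad = -np.outer(probs, s)
        grad[a] += s
        self.Wa += self.lr_a * adv * grad           # actor policy-gradient step

def step(loads, task, action):
    """Toy dynamics: loads decay, the chosen server absorbs the task."""
    loads = loads * 0.7
    loads[action] += task
    reward = 1.0 - loads[action]    # penalize piling work on a loaded server
    return loads, reward

agent = A2CScheduler()
loads = np.zeros(N_SERVERS)
task = rng.uniform(0.1, 0.5)
for _ in range(2000):
    s = np.append(loads, task)
    a = agent.act(s)
    loads, r = step(loads, task, a)
    next_task = rng.uniform(0.1, 0.5)
    agent.update(s, a, r, np.append(loads, next_task))
    task = next_task
```

Because the reward penalizes the post-assignment load of the chosen server, the policy-gradient updates tend to spread tasks across servers, which is the same pressure the paper's load balancing factor is meant to measure.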

Keywords:
Computer science, Reinforcement learning, Cloud computing, Scheduling (production processes), Distributed computing, Dynamic priority scheduling, Computer network, Artificial intelligence, Mathematical optimization, Operating system, Quality of service

Metrics

Cited By: 53
FWCI (Field-Weighted Citation Impact): 44.35
Refs: 40
Citation Normalized Percentile: 1.00 (in top 1%; in top 10%)


Topics

IoT and Edge/Fog Computing (Physical Sciences → Computer Science → Computer Networks and Communications)
Cloud Computing and Resource Management (Physical Sciences → Computer Science → Information Systems)
Blockchain Technology Applications and Security (Physical Sciences → Computer Science → Information Systems)

Related Documents

JOURNAL ARTICLE

Deep Reinforcement Learning for Dynamic Task Scheduling in Edge-Cloud Environments

D. Mamatha Rani, K. P. Supreethi, Bipin Bihari Jayasingh

Journal: International Journal of Electrical and Computer Engineering Systems   Year: 2024   Vol: 15 (10)   Pages: 837-850
JOURNAL ARTICLE

EdgeCloud-DRL: A Deep Reinforcement Learning-Based Task Scheduling Framework for Edge-Cloud Computing

Mohammed Waseem Ahme

Journal: International Journal of Applied Mathematics   Year: 2025   Vol: 38 (6s)   Pages: 839-866
JOURNAL ARTICLE

EADRL: Efficiency-aware adaptive deep reinforcement learning for dynamic task scheduling in edge-cloud environments

J. Anand, B. Karthikeyan

Journal: Results in Engineering   Year: 2025   Vol: 27   Pages: 105890-105890
JOURNAL ARTICLE

Edge Cloud Resource Scheduling with Deep Reinforcement Learning

Yijun Feng, Ming Li, Jiawen Li, Changyuan Yu

Journal: Radioengineering   Year: 2025   Vol: 34 (1)   Pages: 92-108