JOURNAL ARTICLE

Energy-Efficient Computation Offloading Based on Multiagent Deep Reinforcement Learning for Industrial Internet of Things Systems

Samira Chouikhi, Moez Esseghir, Leïla Merghem-Boulahia

Year: 2023 | Journal: IEEE Internet of Things Journal | Vol: 11 (7) | Pages: 12228-12239 | Publisher: Institute of Electrical and Electronics Engineers

Abstract

The term Industrial Internet of Things (IIoT) describes the area of the Internet of Things (IoT) that integrates information and communication technologies (ICTs), such as cloud/edge computing, wireless sensor/actuator networks, and connected objects, to enable and accelerate the development of Industry 4.0. IIoT applications (e.g., smart manufacturing, remote control of industrial machinery, and critical system monitoring) have varying levels of criticality and Quality-of-Service (QoS) requirements. However, the characteristics of the data collected by interconnected devices make it difficult to guarantee QoS requirements in terms of latency and reliability, while energy consumption remains substantial. As a potential solution, edge computing offers additional powerful resources in the proximity of IIoT devices, so the required QoS can be achieved by offloading computation-intensive tasks to edge servers. To take full advantage of offloading, however, the process must be optimized, and conventional optimization methods are too complex to apply in the IIoT context. To overcome this issue, we propose a computation offloading approach based on deep reinforcement learning (DRL) that minimizes long-term energy consumption and maximizes the number of tasks completed before their deadlines. We introduce a multi-agent system to cope with the growing dimensionality of the action space, in which each IIoT device is represented by its own DRL model whose goal is to maximize a flexible, long-term reward. In addition, the DRL models are trained in the cloud and make decisions online at the edge servers, enabling quick decision making by avoiding iterative online optimization procedures. The performance of the proposed approach is evaluated through simulation and shows promising results compared to other approaches.
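The per-device decision process summarized in the abstract (one agent per IIoT device, choosing between local execution and offloading to maximize a long-term reward that trades off energy against deadline misses) can be sketched as follows. This is a toy illustration only: tabular Q-learning stands in for the paper's deep RL models, and the `OffloadAgent` class, reward shape, energy/latency coefficients, and deadline values are all invented for demonstration, not taken from the paper.

```python
import random

class OffloadAgent:
    """One agent per IIoT device, as in the paper's multi-agent setup.
    Tabular Q-learning is used here as a simple stand-in for a DRL model."""
    ACTIONS = ("local", "edge")  # execute on the device vs. offload to an edge server

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, action) -> estimated long-term reward
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # standard Q-learning backup toward reward + discounted best next value
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

def reward(action, task_size, deadline):
    # Hypothetical reward: negative energy cost plus a bonus for meeting the deadline.
    energy = task_size * (1.0 if action == "local" else 0.3)   # offloading saves device energy
    latency = task_size * (0.5 if action == "local" else 0.2)  # edge server computes faster
    return -energy + (5.0 if latency <= deadline else -5.0)

random.seed(0)
agents = [OffloadAgent() for _ in range(3)]  # three IIoT devices, one agent each
for episode in range(2000):
    for agent in agents:
        task_size = random.choice((2, 6))  # small or large task, chosen at random
        state = ("large" if task_size > 4 else "small",)
        action = agent.choose(state)
        agent.update(state, action, reward(action, task_size, deadline=2.0), state)

# With this toy reward, large tasks should end up offloaded (edge wins on energy and latency).
best = max(OffloadAgent.ACTIONS, key=lambda a: agents[0].q.get((("large",), a), 0.0))
print(best)
```

In the paper, the equivalents of these Q-tables are deep networks trained offline in the cloud; only the cheap forward pass (the `choose` step) runs online at the edge, which is what allows fast decisions without iterative optimization.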

Keywords:
Computer science, Reinforcement learning, Cloud computing, Distributed computing, Edge computing, Computation offloading, Energy consumption, Quality of service, Server, Edge device, Computer network, Artificial intelligence, Engineering

Metrics

Cited By: 22
FWCI (Field Weighted Citation Impact): 9.67
Refs: 27
Citation Normalized Percentile: 0.96 (in top 1%; in top 10%)

Topics

IoT and Edge/Fog Computing
Physical Sciences →  Computer Science →  Computer Networks and Communications
Modular Robots and Swarm Intelligence
Physical Sciences →  Engineering →  Mechanical Engineering
Mobile Crowdsensing and Crowdsourcing
Physical Sciences →  Computer Science →  Computer Science Applications

Related Documents

BOOK-CHAPTER

Energy-Efficient Multi-UAV-Enabled Computation Offloading for Industrial Internet of Things via Deep Reinforcement Learning

Shuo Shi, Meng Wang, Xuemai Gu

Series: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering | Year: 2021 | Pages: 295-305
JOURNAL ARTICLE

Deep Reinforcement Learning Based Computation Offloading in Fog Enabled Industrial Internet of Things

Yijing Ren, Yaohua Sun, Mugen Peng

Journal: IEEE Transactions on Industrial Informatics | Year: 2020 | Vol: 17 (7) | Pages: 4978-4987
JOURNAL ARTICLE

Multitask Multiobjective Deep Reinforcement Learning-Based Computation Offloading Method for Industrial Internet of Things

Jun Cai, Hongtian Fu, Yan Liu

Journal: IEEE Internet of Things Journal | Year: 2022 | Vol: 10 (2) | Pages: 1848-1859
JOURNAL ARTICLE

Dual-Q Network Deep Reinforcement Learning-Based Computation Offloading Method for Industrial Internet of Things

Ruizhong Du, Jinru Wu, Yan Gao

Journal: The Journal of Supercomputing | Year: 2024 | Vol: 80 (17) | Pages: 25590-25615