JOURNAL ARTICLE

Hierarchical Multi-Agent Deep Reinforcement Learning for Energy-Efficient Hybrid Computation Offloading

Hang Zhou, Yusi Long, Shimin Gong, Kun Zhu, Dinh Thai Hoang, Dusit Niyato

Year: 2022   Journal: IEEE Transactions on Vehicular Technology   Vol: 72 (1)   Pages: 986-1001   Publisher: Institute of Electrical and Electronics Engineers

Abstract

Mobile edge computing (MEC) provides an economical way for resource-constrained edge users to offload computational workloads to MEC servers co-located with the access point (AP). In this article, we consider a hybrid computation offloading scheme that allows edge users to offload workloads by using both active RF communications and backscatter communications. We aim to maximize the overall energy efficiency, subject to the completion of all workloads, by jointly optimizing the AP's beamforming and the users' offloading decisions. Considering a dynamic environment, we propose a hierarchical multi-agent deep reinforcement learning (H-MADRL) framework to solve this problem. The high-level agent resides in the AP and optimizes the beamforming strategy, while the low-level user agents learn and adapt their individual offloading strategies. To further improve the learning efficiency, we propose a novel optimization-driven learning algorithm that allows the AP to estimate the low-level users' actions by efficiently solving an approximate optimization problem. The action estimate can then be shared with all users and drive them to update their individual actions independently. Simulation results reveal that our algorithm can improve the system performance by 50%. The learning efficiency and reliability are also improved significantly compared with model-free learning methods.
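The hierarchical structure described in the abstract can be illustrated with a minimal sketch: a high-level AP agent picks a beamforming vector and broadcasts an action estimate from an approximate optimization, and low-level user agents turn that estimate into offloading decisions. All class names, the threshold rule, and the random placeholder policy below are illustrative assumptions, not the paper's actual algorithm.

```python
import random

class HighLevelAgent:
    """AP-side agent: chooses a beamforming strategy and derives a shared
    action estimate by solving a stand-in for the approximate optimization."""
    def __init__(self, n_users):
        self.n_users = n_users

    def choose_beamforming(self, state):
        # Placeholder for a learned DRL policy: one gain per user in [0, 1].
        return [round(random.random(), 3) for _ in range(self.n_users)]

    def estimate_user_actions(self, beamforming):
        # Illustrative rule: offload via active RF when the beamforming gain
        # toward a user is high, otherwise fall back to backscatter.
        return ["active" if g > 0.5 else "backscatter" for g in beamforming]

class LowLevelAgent:
    """User-side agent: updates its own offloading action from the shared
    estimate (here it trivially adopts the AP's suggestion)."""
    def __init__(self, user_id):
        self.user_id = user_id

    def decide(self, estimate):
        return estimate[self.user_id]

def one_round(n_users=3, seed=0):
    """One high-level/low-level interaction round."""
    random.seed(seed)
    ap = HighLevelAgent(n_users)
    users = [LowLevelAgent(i) for i in range(n_users)]
    beam = ap.choose_beamforming(state=None)
    estimate = ap.estimate_user_actions(beam)
    decisions = [u.decide(estimate) for u in users]
    return beam, decisions
```

In the paper both levels are trained with deep RL and the estimate comes from an actual optimization subproblem; the sketch only shows the information flow (AP decision, shared estimate, independent user updates).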

Keywords:
Reinforcement learning, Computer science, Mobile edge computing, Computation offloading, Server, Efficient energy use, Edge computing, Distributed computing, Reliability (semiconductor), Workload, Enhanced Data Rates for GSM Evolution, Optimization problem, Artificial intelligence, Computer network, Engineering

Metrics

Cited By: 24
FWCI (Field Weighted Citation Impact): 5.14
Refs: 43
Citation Normalized Percentile: 0.93 (in top 10%)

Topics

IoT and Edge/Fog Computing (Physical Sciences → Computer Science → Computer Networks and Communications)
Mobile Crowdsensing and Crowdsourcing (Physical Sciences → Computer Science → Computer Science Applications)
Energy Harvesting in Wireless Networks (Physical Sciences → Engineering → Electrical and Electronic Engineering)

Related Documents

JOURNAL ARTICLE

Multi-Agent Deep Reinforcement Learning for Efficient Computation Offloading in Mobile Edge Computing

Tianzhe Jiao, Xiaoyue Feng, Chaopeng Guo, Dongqi Wang, Jie Song

Journal: Computers, Materials & Continua   Year: 2023   Vol: 76 (3)   Pages: 3585-3603
JOURNAL ARTICLE

Computation Offloading via Multi-Agent Deep Reinforcement Learning in Aerial Hierarchical Edge Computing Systems

Yuanyuan Wang, Chi Zhang, Taiheng Ge, Miao Pan

Journal: IEEE Transactions on Network Science and Engineering   Year: 2024   Vol: 11 (6)   Pages: 5253-5266
JOURNAL ARTICLE

Hierarchical Multi-Agent Deep Reinforcement Learning for Backscatter-aided Data Offloading

Hang Zhou, Yusi Long, Wenjie Zhang, Jing Xu, Shimin Gong

Journal: 2022 IEEE Wireless Communications and Networking Conference (WCNC)   Year: 2022   Pages: 542-547