N. Vijayalakshmi, Sagar Gulati, Ben Sujin, B. Madhav Rao, K. Kiran Kumar
The increasing computational demand of real-time mobile applications has driven the development of mobile edge computing (MEC) supported by unmanned aerial vehicles (UAVs), a promising paradigm that establishes high-throughput line-of-sight links to ground users and pushes computational resources to the network edge. By offloading tasks to a UAV acting as an edge server, users can reduce processing latency and the load on their local devices. However, the coverage of a single UAV is limited, and the data transmitted to it is vulnerable to eavesdropping. In this study, we therefore propose a transmission scheme for UAV-assisted MEC based on multi-agent deep reinforcement learning. The proposed approach first applies the particle swarm optimization algorithm to optimize UAV deployment. Deep reinforcement learning is then used to optimize secure offloading, maximizing system utility while minimizing the amount of information eavesdropped, and accounting for different user task types with diverse preferences regarding processing time and the residual energy of the computing equipment. Simulation results demonstrate that, compared with the single-agent strategy and the benchmark, the multi-agent approach optimizes offloading more effectively and achieves higher system utility.
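To illustrate the deployment step, the following is a minimal particle swarm optimization (PSO) sketch for placing a single UAV in a 2-D service area so that the total squared distance to ground users is minimized. This is a toy objective chosen for demonstration, not the paper's actual utility function; the user coordinates, search bounds, and PSO hyperparameters (inertia `w`, acceleration coefficients `c1`, `c2`) are all assumptions.

```python
import random

random.seed(0)

# Hypothetical ground-user coordinates in a 5 x 5 area (assumed for illustration).
USERS = [(1.0, 2.0), (4.0, 0.5), (3.0, 3.5), (0.5, 4.0)]

def cost(pos):
    """Sum of squared distances from a candidate UAV position to all users."""
    x, y = pos
    return sum((x - ux) ** 2 + (y - uy) ** 2 for ux, uy in USERS)

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO loop: each particle tracks its personal best, and the
    swarm tracks a global best that all particles are attracted toward."""
    pos = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best position per particle
    gbest = min(pbest, key=cost)[:]        # global best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)[:]
    return gbest

best = pso()
```

For this convex objective the optimum is the users' centroid, so the swarm converges there quickly; the paper's actual deployment problem (coverage and secrecy constraints) would replace `cost` with a far richer objective.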