JOURNAL ARTICLE

Deep Reinforcement Learning Based on Actor-Critic for Task Offloading in Vehicle Edge Computing

Abstract

With the rapid advancement of the Internet of Vehicles (IoV), providing reliable connectivity and high-quality services to vehicles has become a challenge. Mobile edge computing (MEC) is used to address the problem that vehicles may generate large volumes of data that cannot be processed in time, since the MEC server has greater computing power than the vehicle. This paper presents a model for task offloading in vehicle edge computing (VEC) based on deep reinforcement learning (DRL): computation tasks that exceed the vehicle's own capability are delegated to the VEC server's more powerful processors. A resource reservation server (RRS) serves as the resource reservation component of the VEC server. We combine reinforcement learning (RL) with deep learning (DL) to improve convergence efficiency, and we adopt the Actor-Critic algorithm to improve the training efficiency of the model. Simulation results show that the proposed Actor-Critic-based DRL algorithm effectively improves the efficiency of VEC servers and reduces vehicle costs.
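The abstract's core idea, an actor-critic agent learning whether a vehicle should compute a task locally or offload it to the more powerful VEC server, can be illustrated with a minimal sketch. The environment below (CPU speeds `F_LOCAL`/`F_EDGE`, upload overhead `TX_DELAY`, the linear features, and the one-step episode structure) is an illustrative assumption, not the paper's model: a softmax actor picks an action, a linear critic estimates the expected cost, and both are updated from the advantage signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offloading environment (illustrative assumptions, not the paper's model):
F_LOCAL, F_EDGE, TX_DELAY = 1.0, 5.0, 0.4  # local CPU, edge CPU, upload overhead

def cost(task_size, action):
    """Latency of a task: action 0 = compute locally, 1 = offload to VEC server."""
    if action == 0:
        return task_size / F_LOCAL
    return task_size / F_EDGE + TX_DELAY

def features(task_size):
    """Linear state features: a bias term plus the (normalized) task size."""
    return np.array([1.0, task_size])

theta = np.zeros((2, 2))  # actor weights: one row per action
w = np.zeros(2)           # critic weights: linear value estimate

ALPHA_ACTOR, ALPHA_CRITIC = 0.1, 0.1
for _ in range(5000):
    s = rng.uniform(0.1, 2.0)                 # sample a task size
    x = features(s)
    logits = theta @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax policy over {local, offload}
    a = rng.choice(2, p=probs)
    r = -cost(s, a)                           # reward = negative latency
    adv = r - w @ x                           # one-step advantage (episode ends here)
    w += ALPHA_CRITIC * adv * x               # critic update toward observed return
    grad = -probs[:, None] * x                # grad of log softmax w.r.t. theta ...
    grad[a] += x                              # ... plus the chosen action's features
    theta += ALPHA_ACTOR * adv * grad         # policy-gradient step weighted by advantage

# For a large task, the edge server's compute advantage outweighs the upload
# delay, so the trained policy should prefer offloading.
x_big = features(1.5)
logits = theta @ x_big
p = np.exp(logits - logits.max())
p /= p.sum()
p_offload = p[1]
```

In this toy setting offloading is cheaper whenever `task_size > 0.5`, so after training the actor assigns high probability to offloading large tasks; the critic's value estimate is what keeps the policy-gradient updates low-variance, which is the convergence benefit the abstract attributes to the Actor-Critic design.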

Keywords:
Reinforcement learning, Reservation, Computer science, Mobile edge computing, Server, Edge computing, Enhanced Data Rates for GSM Evolution, Convergence, Task, Distributed computing, Computer network, Artificial intelligence, Engineering

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.33
Refs: 20
Citation Normalized Percentile: 0.55

Topics

Advanced Wireless Communication Technologies
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
IoT and Edge/Fog Computing
Physical Sciences →  Computer Science →  Computer Networks and Communications
Advanced MIMO Systems Optimization
Physical Sciences →  Engineering →  Electrical and Electronic Engineering