Cui Zhang, Wenjun Zhang, Qiong Wu, Pingyi Fan, Qiang Fan, Jiangzhou Wang, Khaled B. Letaief
Federated learning (FL) can protect the privacy of vehicles in vehicle edge computing (VEC) to a certain extent by sharing the gradients of vehicles' local models instead of the local data. For vehicular artificial intelligence (AI) applications, these gradients are usually large, so transmitting them incurs high per-round latency. Gradient quantization has been proposed as an effective approach to reduce the per-round latency in FL-enabled VEC: it compresses the gradients and reduces the number of bits, i.e., the quantization level, used to transmit them. The choice of quantization level and thresholds determines the quantization error (QE), which in turn affects the model accuracy and training time. Hence, the total training time and QE become two key metrics for FL-enabled VEC, and it is critical to optimize them jointly. The time-varying channel conditions, however, make this problem even more challenging. In this article, we propose a distributed deep reinforcement learning (DRL)-based quantization level allocation scheme to optimize a long-term reward defined in terms of the total training time and QE. Extensive simulations identify the optimal weighting factors between the total training time and QE, and demonstrate the feasibility and effectiveness of the proposed scheme.
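The abstract does not specify the quantizer used; as a minimal sketch of the idea, the following hypothetical example applies stochastic uniform quantization to a gradient vector and measures the resulting quantization error (QE) at different quantization levels. The function name and parameters are illustrative assumptions, not the paper's method.

```python
import random

def quantize_gradient(grad, num_bits, rng=random):
    """Stochastic uniform quantization of a gradient vector.

    Each entry is snapped to one of 2**num_bits evenly spaced levels
    between the vector's min and max, rounding up with probability
    equal to the fractional position so the quantizer is unbiased.
    """
    levels = 2 ** num_bits - 1
    lo, hi = min(grad), max(grad)
    scale = (hi - lo) / levels if hi > lo else 1.0
    out = []
    for g in grad:
        pos = (g - lo) / scale          # position on the quantization grid
        floor = int(pos)                # pos >= 0, so int() == floor()
        q = floor + (rng.random() < pos - floor)  # stochastic rounding
        out.append(lo + q * scale)
    return out

# Fewer bits -> coarser grid -> larger mean-squared quantization error.
rng = random.Random(0)
grad = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
for bits in (2, 4, 8):
    q = quantize_gradient(grad, bits, random.Random(1))
    qe = sum((a - b) ** 2 for a, b in zip(q, grad)) / len(grad)
    print(f"{bits}-bit QE: {qe:.6f}")
```

This illustrates the trade-off the paper optimizes: a lower quantization level shrinks the payload (and thus per-round latency) but inflates the QE, which degrades model accuracy.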