JOURNAL ARTICLE

Deep Reinforcement Learning-based Quantization for Federated Learning

Abstract

Federated learning (FL) is a promising way to harness advances in machine learning while preserving privacy, but the communication overhead of model exchange remains an obstacle to deploying FL in wireless networks. To tackle this challenge, we consider non-uniform quantization of the global model in this work. By formulating the optimization of quantization intervals as a Markov decision process (MDP), we propose a deep reinforcement learning (DRL)-based approach to improve the performance of the quantizer for FL. By crafting a compound reward function, the DRL agent is guided to reduce the quantization error and the training loss simultaneously. Furthermore, a dual time-scale mechanism between FL and DRL is adopted to ensure that the actor and critic models of DRL converge more steadily. Simulations on various real-world datasets reveal that the proposed method provides higher accuracy and faster convergence than existing uniform quantizers, and that it retains these benefits when the learned policy is applied to a similar learning task.
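The abstract's core ingredients, a non-uniform quantizer whose interval boundaries are the DRL agent's action, and a compound reward that penalizes both quantization error and training loss, can be sketched as follows. This is a minimal illustration with hypothetical names and weighting hyperparameters (`alpha`, `beta`); the paper's exact reward formulation and quantizer design are not reproduced here.

```python
# Sketch of non-uniform quantization with a compound reward signal.
# The boundaries and levels stand in for the intervals a DRL agent would choose.
from bisect import bisect_right


def quantize(weights, boundaries, levels):
    """Map each weight to the representative level of its interval.

    boundaries: sorted inner cut points; len(levels) == len(boundaries) + 1.
    """
    return [levels[bisect_right(boundaries, w)] for w in weights]


def quantization_error(weights, quantized):
    """Mean squared error introduced by quantization."""
    return sum((w - q) ** 2 for w, q in zip(weights, quantized)) / len(weights)


def compound_reward(weights, quantized, loss_before, loss_after,
                    alpha=1.0, beta=1.0):
    """Compound reward: higher when quantization error is small and the
    FL training loss decreased (alpha/beta are assumed trade-off weights)."""
    loss_delta = loss_after - loss_before
    return -alpha * quantization_error(weights, quantized) - beta * loss_delta
```

For example, with boundaries `[-0.25, 0.25]` and levels `[-0.5, 0.0, 0.5]`, the weights `[0.1, -0.4, 0.7]` map to `[0.0, -0.5, 0.5]`; the agent would then adjust the boundaries to trade quantization error against its effect on the global loss.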

Keywords:
Reinforcement learning, Computer science, Markov decision process, Quantization (signal processing), Artificial intelligence, Overhead (engineering), Federated learning, Distributed learning, Machine learning, Partially observable Markov decision process, Premise, Distributed computing, Markov process, Markov chain, Markov model, Algorithm, Mathematics

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 1.28
References: 18
Citation Normalized Percentile: 0.79

Topics

Privacy-Preserving Technologies in Data
Physical Sciences →  Computer Science →  Artificial Intelligence
Wireless Communication Security Techniques
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Stochastic Gradient Optimization Techniques
Physical Sciences →  Computer Science →  Artificial Intelligence