JOURNAL ARTICLE

Over-the-Air Computation Assisted Federated Learning with Progressive Training

Abstract

Federated learning (FL) with progressive training is a promising privacy-preserving and communication-efficient framework for edge intelligence applications. Specifically, by partitioning the global model into multiple sub-models and dividing the FL training into multiple stages, FL with progressive training enables the gradual training of a large model, thereby significantly reducing the transmission overhead without compromising learning performance. However, implementing FL with progressive training over wireless networks is hindered by the limited radio and energy resources. To address these issues, we adopt over-the-air computation (AirComp) to support FL with progressive training over wireless networks. By balancing the tradeoff between the AirComp transmission distortion and the transition efficiency of progressive training, we formulate a mixed-integer optimization problem with energy and power constraints, which is further decomposed into several subproblems via Lyapunov optimization. Subsequently, we develop a low computational-complexity algorithm that jointly optimizes transmit power, receive beamforming, and transition indicator in an alternating manner. Simulation results demonstrate the effectiveness of our optimization algorithm in improving the learning performance of the considered FL system.
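To make the over-the-air aggregation concrete, the following is a minimal hypothetical sketch of AirComp-based model aggregation: each device analog-precodes its local update so the superposed signal, after receive beamforming, yields the sum of updates. All names and the simple channel-inversion power control (`h`, `p`, `f`, `eta`) are illustrative assumptions, not the paper's exact notation or optimized solution.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, D = 4, 8, 16          # devices, receive antennas, model dimension

x = rng.standard_normal((K, D))   # local model updates (one row per device)
# Rayleigh-fading channels from each device to the M-antenna receiver
h = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Unit-norm receive beamformer (random here; the paper optimizes it)
f = rng.standard_normal(M) + 1j * rng.standard_normal(M)
f /= np.linalg.norm(f)

# Channel-inversion precoding: align every device's effective gain to eta,
# the weakest effective channel, so transmit power stays bounded.
eta = min(abs(np.conj(f) @ h[k]) for k in range(K))
p = np.array([eta / (np.conj(f) @ h[k]) for k in range(K)])

# Superposition over the air: y[m, d] = sum_k h[k, m] * p[k] * x[k, d] + noise
noise = 0.01 * (rng.standard_normal((M, D)) + 1j * rng.standard_normal((M, D)))
y = np.einsum('km,k,kd->md', h, p, x) + noise

# Beamform, rescale, and average to recover the aggregated model update.
agg = np.real(np.conj(f) @ y) / eta / K
distortion = np.max(np.abs(agg - x.mean(axis=0)))   # residual AirComp distortion
```

The residual `distortion` comes from receiver noise amplified by `1/eta`; this is the transmission-distortion side of the tradeoff that the joint power, beamforming, and transition-indicator optimization in the paper aims to balance.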

Keywords:
Federated learning; Progressive training; Over-the-air computation (AirComp); Lyapunov optimization; Edge intelligence

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.64
References: 16
Citation Normalized Percentile: 0.68

Topics

Privacy-Preserving Technologies in Data
Physical Sciences →  Computer Science →  Artificial Intelligence
Cryptography and Data Security
Physical Sciences →  Computer Science →  Artificial Intelligence