Zitha Sasindran, Harsha Yelchuri, T. V. Prabhakar
Our proposed resource-efficient semi-asynchronous federated learning (RE-SAFL) approach offers an effective way to train large models, such as Automatic Speech Recognition (ASR) models, in a distributed, semi-asynchronous manner. Our work highlights the importance of resource-efficient work allocation when deploying demanding tasks such as ASR in real time on edge devices like mobile phones. To validate the approach, we conducted experiments on a real FL testbed of Android mobile devices. By accounting for the resource constraints of client devices and optimizing work allocation, the RE-SAFL framework opens new possibilities for training large models in semi-asynchronous federated environments.
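The abstract does not specify RE-SAFL's aggregation rule, so as a minimal sketch of what generic semi-asynchronous federated averaging looks like, here is a staleness-weighted merge of buffered client updates. All names, the decay formula, and the buffering scheme below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def staleness_weight(staleness, alpha=0.5):
    # Assumed polynomial decay: older updates contribute less.
    return (1.0 + staleness) ** (-alpha)

def semi_async_aggregate(global_model, updates, lr=1.0):
    """Merge a buffered batch of client updates into the global model.

    updates: list of (delta, staleness) pairs, where delta is the
    client's model update and staleness counts the global rounds
    elapsed since that client last pulled the model. Unlike fully
    synchronous FedAvg, the server does not wait for all clients;
    it aggregates whatever updates arrived in the current window.
    """
    weights = np.array([staleness_weight(s) for _, s in updates])
    weights /= weights.sum()  # normalize so weights sum to 1
    merged = sum(w * d for w, (d, _) in zip(weights, updates))
    return global_model + lr * merged

# Toy usage: two fresh clients and one very stale client whose
# large update is down-weighted by its staleness of 5 rounds.
g = np.zeros(3)
ups = [(np.ones(3), 0), (np.ones(3), 0), (10 * np.ones(3), 5)]
new_g = semi_async_aggregate(g, ups)
```

With equal weights the stale client would dominate (the plain average here is 4.0 per coordinate); staleness weighting pulls the merged update well below that.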
Ji Liu, Tianshi Che, Yang Zhou, Ruoming Jin, Huaiyu Dai, Dejing Dou, Patrick Valduriez
Yajie Zhou, Xiaoyi Pang, Zhibo Wang, Jiahui Hu, Peng Sun, Kui Ren
Zhoubin Kou, Yun Ji, Danni Yang, Sheng Zhang, Xiaoxiong Zhong
Yunming Liao, Yang Xu, Hongli Xu, Min Chen, Lun Wang, Chunming Qiao
Ling Li, Cheng Guo, Xinyu Tang, Kim-Kwang Raymond Choo, Yining Liu