JOURNAL ARTICLE

Resource Allocation for Distributed Machine Learning at the Edge-Cloud Continuum

Abstract

Edge computing has emerged as a paradigm for processing tasks locally, reducing the distances over which data must be transferred. This creates an opportunity for data-transfer-intensive, distributed machine learning. In this paper we develop a solution for serving distributed Machine Learning (ML) training jobs on the edge-cloud continuum. We model the specific requirements of each ML job and the features of the edge and cloud resources. We then develop an Integer Linear Programming (ILP) algorithm to perform the resource allocation. We examine scenarios with different processing and bandwidth costs and quantify the tradeoffs between performance and the cost of edge/cloud bandwidth and processing resources. Our simulations indicate that, although many parameters determine the allocation, processing costs play, on average, the most important role, while cloud bandwidth costs can be significant in certain scenarios. Finally, in several of the examined cases, combining edge and cloud resources yields significant monetary savings compared to using exclusively edge or exclusively cloud resources.
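The cost structure the abstract describes (per-job processing and bandwidth costs at each site) can be illustrated with a toy allocation sketch. This is not the paper's ILP formulation; the job and site parameters below are hypothetical, and a brute-force search stands in for the ILP solver, which is exact only for tiny instances.

```python
from itertools import product

# Hypothetical toy instance: all names and numbers are illustrative,
# not taken from the paper's model.
jobs = {
    "job_a": {"compute": 4.0, "data": 10.0},   # compute units, GB transferred
    "job_b": {"compute": 10.0, "data": 2.0},
}
sites = {
    # cost per compute unit, cost per GB of bandwidth
    "edge":  {"proc_cost": 3.0, "bw_cost": 0.5},
    "cloud": {"proc_cost": 1.0, "bw_cost": 2.0},
}

def total_cost(assignment):
    """Sum processing and bandwidth costs of a job -> site assignment."""
    cost = 0.0
    for job, site in assignment.items():
        j, s = jobs[job], sites[site]
        cost += j["compute"] * s["proc_cost"] + j["data"] * s["bw_cost"]
    return cost

def best_assignment():
    """Exhaustively search all job -> site mappings for the cheapest one."""
    best, best_cost = None, float("inf")
    for choice in product(sites, repeat=len(jobs)):
        assignment = dict(zip(jobs, choice))
        c = total_cost(assignment)
        if c < best_cost:
            best, best_cost = assignment, c
    return best, best_cost

if __name__ == "__main__":
    assignment, cost = best_assignment()
    print(assignment, cost)
```

In this instance the data-heavy job stays at the edge (cheap bandwidth) while the compute-heavy job goes to the cloud (cheap processing), and the mixed allocation is cheaper than placing both jobs at either site alone, mirroring the collaboration benefit the abstract reports.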

Keywords:
Edge computing, Cloud computing, Distributed computing, Resource allocation, Bandwidth (computing), Edge device, Artificial intelligence, Computer network, Computer science

Metrics

Cited by: 8
References: 23
FWCI (Field-Weighted Citation Impact): 2.00
Citation Normalized Percentile: 0.85

Topics

IoT and Edge/Fog Computing (Physical Sciences → Computer Science → Computer Networks and Communications)
Blockchain Technology Applications and Security (Physical Sciences → Computer Science → Information Systems)
IoT Networks and Protocols (Physical Sciences → Engineering → Electrical and Electronic Engineering)