Cloud deployments with large centralized data centers at the network core have so far served as a good template for the cost-effective delivery of Web services and content delivery applications. However, this traditional model is unsuitable for applications with more complex network and latency requirements. The distributed cloud, in which the large centralized data centers at the network core are supplanted or augmented by a set of smaller, geographically distributed data centers at the network edge, can deliver such applications more efficiently. In this paper, we focus on the problem of delivering the distributed cloud by (1) characterizing the main requirements that must be met to realize the distributed cloud, (2) introducing Wind, an experimental management and orchestration system for the distributed cloud that comprehensively addresses each of the identified challenges, and (3) quantifying the advantages of delivering the distributed cloud via Wind in a large-scale simulated setting. The collected results show that Wind can significantly reduce both the overall latency between applications and users and the bandwidth cost of delivering an application in the distributed cloud.
Konstantinos Kontodimas, Polyzois Soumplis, Aristotelis Kretsis, Panagiotis Kokkinos, Marcell Fehér, Daniel E. Lucani, Emmanouel Varvarigos