Madhavi Jaival, Katya Mkrtchyan, Adam Kaplan
Due to its cost-effectiveness and limited administrative scope, Serverless Computing has quickly become a favored cloud computing execution model. Meanwhile, with the rise of distributed cloud architectures and microservices over the last decade, many development teams have adopted the principles of Chaos Engineering, which allow them to assess the effects of random failures or delays on an application. In prior literature, serverless developers have measured and reported cold-start penalties and transaction latency, whereas Chaos Engineers have studied security and resiliency in cloud infrastructure. In this work, we combine these approaches to measure the performance of a set of serverless cloud functions that implement common server-side file and database operations. We study each function's performance response under a set of controlled chaos experiments, in which we emulate various client load conditions and inject random delays into the function execution. We find that under a heavy 1000-client load, the longest-latency operations can improve overall response time by as much as 36.5% by failing early.
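The random-delay injection described in the abstract can be sketched as a simple chaos wrapper around a serverless handler. This is a minimal illustration only; the decorator name, parameters, and handler shape are assumptions, not the authors' actual experiment harness.

```python
import functools
import random
import time

def inject_delay(max_delay_s=0.5, probability=0.3):
    """Chaos wrapper (illustrative): with the given probability, sleep a
    random amount of time before invoking the wrapped handler."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapped(event, context=None):
            if random.random() < probability:
                # Emulate an injected execution delay.
                time.sleep(random.uniform(0.0, max_delay_s))
            return handler(event, context)
        return wrapped
    return decorator

@inject_delay(max_delay_s=0.2, probability=1.0)
def handler(event, context=None):
    # Stand-in for a server-side file or database operation.
    return {"status": 200, "body": event.get("name", "")}
```

Under load testing, such a wrapper lets an experimenter observe how client-perceived latency shifts when individual function invocations are randomly slowed.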
Cecilia Calavaro, Valeria Cardellini, Francesco Lo Presti, Gabriele Russo Russo
Lavi Ben-Shimol, Danielle Lavi, Eitan Klevansky, Oleg Brodt, Dudu Mimran, Yuval Elovici, Asaf Shabtai