Serverless computing has grown rapidly as a new cloud computing paradigm that promises ease of management, cost efficiency, and auto-scaling by shipping functions via self-contained virtualized containers. Unfortunately, serverless computing suffers from severe cold-start problems: starting containers incurs non-trivial latency. Full container caching is widely applied to mitigate cold-starts, yet it has recently been outperformed by two lines of research: partial container caching and container sharing. However, both partial container caching and container sharing techniques exhibit drawbacks. Partial container caching effectively deals with burstiness but leaves cold-start mitigation halfway; container sharing reduces cold-starts by enabling containers to serve multiple functions but suffers from excessive memory waste due to over-packed containers. This paper proposes RainbowCake, a layer-wise container pre-warming and keep-alive technique that effectively mitigates cold-starts with sharing awareness at minimal memory waste. With structured container layers and sharing-aware modeling, RainbowCake is robust and tolerant to invocation bursts. We seize the opportunity for container sharing behind the startup process of standard container techniques. RainbowCake breaks the startup process of a container into three stages and manages the different container layers individually. We develop a sharing-aware algorithm that makes event-driven, layer-wise caching decisions in real time. Experiments on OpenWhisk clusters with real-world workloads show that RainbowCake reduces function startup latency by 68% and memory waste by 77% compared to state-of-the-art solutions.
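To make the layer-wise idea concrete, here is a minimal, hypothetical sketch of per-layer keep-alive: a container's startup is split into three layers (a bare container, a language runtime, and function-specific user code), and each layer's cache entry expires independently, so the more shareable lower layers outlive the function-specific top layer. All names, TTL values, and policies below are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Func:
    name: str      # function identifier (owns the user layer)
    runtime: str   # language runtime (shareable lang layer)

@dataclass
class LayerCache:
    # Illustrative keep-alive windows in seconds: lower layers are
    # shareable across more functions, so they are kept alive longer.
    ttl = {"bare": 60.0, "lang": 30.0, "user": 10.0}
    entries: dict = field(default_factory=dict)  # (kind, id) -> expiry time

    def warm_level(self, func, now):
        """Deepest cached layer usable for `func`:
        2 = user-layer hit (warm start), 1 = lang layer,
        0 = bare container, -1 = full cold start."""
        for level, key in ((2, ("user", func.name)),
                           (1, ("lang", func.runtime)),
                           (0, ("bare", "base"))):
            expiry = self.entries.get(key)
            if expiry is not None and expiry > now:
                return level
        return -1

    def keep_alive(self, func, now):
        # After an invocation, refresh each layer's expiry independently,
        # so the shared bare/lang layers outlive the user layer.
        self.entries[("bare", "base")] = now + self.ttl["bare"]
        self.entries[("lang", func.runtime)] = now + self.ttl["lang"]
        self.entries[("user", func.name)] = now + self.ttl["user"]
```

For example, after an invocation at time 0, a request for the same function at time 5 hits the user layer (warm start); at time 20 only the shared language-runtime layer remains; at time 40 only the bare container; by time 70 everything has expired and the next invocation is fully cold.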


Computer Science

Publication Status

Open Access


National Science Foundation, Grant RINGS-2148309

Document Type

Article - Conference proceedings


© 2024 Association for Computing Machinery (ACM), All rights reserved.

Publication Date

27 Apr 2024