CoCo: Coordinated Container Scheduling with Last-Level Cache and Memory Bandwidth Partitioning

Last-level cache (LLC) and memory bandwidth partitioning are commonly used in existing work to meet the QoS requirements of latency-critical applications consolidated on a physical server. With the increasing popularity of cloud microservices and the Function-as-a-Service (FaaS) paradigm, the number of containers consolidated on a single server has grown significantly. However, because the hardware supports only a limited number of partitions, existing approaches fail to scale to such a large number of applications.
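Concretely, this kind of partitioning is exposed on Linux through the resctrl filesystem (backed by Intel CAT/MBA). A hedged configuration sketch is below; the group name is hypothetical, the commands require root and RDT-capable hardware, and the key point is that the hardware supports only a small, fixed number of such groups (CLOSes), which is the scaling limitation referred to above:

```shell
# Mount the resctrl interface (requires a CAT/MBA-capable CPU)
mount -t resctrl resctrl /sys/fs/resctrl

# Create one partition (backed by a hardware CLOS) for a group of containers;
# only a small, fixed number of these groups exist per socket.
mkdir /sys/fs/resctrl/coco_grp0

# Restrict the group to 4 LLC ways (bitmask 0xf) and 40% memory bandwidth
echo "L3:0=f"  > /sys/fs/resctrl/coco_grp0/schemata
echo "MB:0=40" > /sys/fs/resctrl/coco_grp0/schemata

# Assign a container's PIDs to the partition (here: the current shell, as a stand-in)
echo $$ > /sys/fs/resctrl/coco_grp0/tasks
```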

To bridge this gap, this project proposes CoCo, a coordinated container scheduling approach with LLC and memory bandwidth partitioning. Our quantitative evaluation shows that CoCo outperforms the no-partitioning and baseline approaches by up to 920% and 9.4%, respectively. Our main contributions are:

  • We present a comprehensive characterization of the impact of LLC and memory bandwidth partitioning, and of the sensitivity to shared-resource contention, for popular services widely used in cloud microservices.
  • We propose CoCo, coordinated container scheduling with LLC and memory bandwidth partitioning. CoCo dynamically profiles consolidated workloads and uses a time-sharing algorithm (a multi-level-queue weighted round-robin algorithm) to maximize overall performance and resource utilization.
  • We implemented a prototype of CoCo as a user-level runtime system on Linux, which requires no modification to the underlying OS or the consolidated applications.
  • We evaluated the effectiveness of CoCo, showing that it outperforms the implemented baseline approaches, and discussed CoCo's limitations and potential future work.
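The time-sharing idea behind the second contribution can be sketched as follows. This is an illustrative reconstruction under our own assumptions, not CoCo's actual implementation: the `Container` class, the sensitivity-based binning, and the per-level weights are all hypothetical. Containers are binned into priority queues by their profiled contention sensitivity, and in each scheduling epoch a queue at level L grants up to L+1 dedicated LLC/memory-bandwidth partitions, rotating within the queue so the scarce partitions are time-shared:

```python
from collections import deque

class Container:
    """A consolidated workload with a profiled contention sensitivity in [0, 1)."""
    def __init__(self, name, sensitivity):
        self.name = name
        self.sensitivity = sensitivity

def schedule_epochs(containers, levels=3, epochs=6):
    """Multi-level-queue weighted round robin (illustrative sketch):
    bin containers into `levels` priority queues by sensitivity; each
    epoch, a level-L queue may grant up to L+1 dedicated partitions,
    round-robining within the queue across epochs."""
    queues = [deque() for _ in range(levels)]
    for c in containers:
        queues[min(levels - 1, int(c.sensitivity * levels))].append(c)
    schedule = []
    for _ in range(epochs):
        granted = []
        for level in range(levels - 1, -1, -1):   # serve high priority first
            q = queues[level]
            for _ in range(min(level + 1, len(q))):  # weight = level + 1
                c = q.popleft()
                granted.append(c.name)
                q.append(c)                       # rotate within the level
        schedule.append(granted)
    return schedule
```

With this weighting, a highly sensitive container receives a dedicated partition every epoch, while less sensitive containers in a lower queue alternate turns, which is the sense in which the algorithm time-shares a fixed number of hardware partitions.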
Haoran Qiu
Ph.D. Student in Computer Science