Autoscaling

From Models to Operators: Rethinking Autoscaling Granularity for Large Generative Models

Serving large generative models such as LLMs and multimodal transformers requires balancing user-facing SLOs (e.g., time-to-first-token, time-between-tokens) with provider goals of efficiency and cost reduction. Existing solutions rely on static …
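As a point of reference, the two SLO metrics named in the abstract can be computed directly from per-token emission timestamps. The sketch below is a minimal illustration only; the SLO targets and trace values are hypothetical, not figures from the paper.

```python
from dataclasses import dataclass

@dataclass
class RequestTrace:
    arrival_s: float            # wall-clock time the request arrived
    token_emit_s: list[float]   # wall-clock time each output token was emitted

def ttft(trace: RequestTrace) -> float:
    """Time-to-first-token: delay between arrival and the first emitted token."""
    return trace.token_emit_s[0] - trace.arrival_s

def mean_tbt(trace: RequestTrace) -> float:
    """Mean time-between-tokens over the decode phase."""
    gaps = [b - a for a, b in zip(trace.token_emit_s, trace.token_emit_s[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical SLO targets and a sample trace.
TTFT_SLO_S = 0.5
TBT_SLO_S = 0.05

trace = RequestTrace(arrival_s=0.0, token_emit_s=[0.42, 0.46, 0.51, 0.55, 0.60])
print(f"TTFT={ttft(trace):.3f}s (SLO {TTFT_SLO_S}s), "
      f"mean TBT={mean_tbt(trace):.3f}s (SLO {TBT_SLO_S}s)")
```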

ModServe: Modality- and Stage-Aware Resource Disaggregation for Scalable Multimodal Model Serving

Large multimodal models (LMMs) demonstrate impressive capabilities in understanding images, videos, and audio beyond text. However, efficiently serving LMMs in production environments poses significant challenges due to their complex architectures …
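The abstract's core idea, disaggregating a multimodal pipeline so each modality- or stage-specific component scales on its own, can be illustrated with a toy sizing calculation. Everything below is an assumption for illustration: the stage names, per-replica capacities, and offered loads are made up and do not describe ModServe's actual mechanism.

```python
import math

# Hypothetical per-stage capacities (requests/s one replica sustains) and
# observed offered load; stage names are illustrative only.
stage_capacity_rps = {"image_encode": 8.0, "prefill": 12.0, "decode": 30.0}
offered_load_rps   = {"image_encode": 20.0, "prefill": 45.0, "decode": 45.0}

def replicas_needed(load: float, capacity: float, headroom: float = 0.8) -> int:
    """Size a stage independently, keeping utilization below `headroom`."""
    return max(1, math.ceil(load / (capacity * headroom)))

per_stage = {s: replicas_needed(offered_load_rps[s], cap)
             for s, cap in stage_capacity_rps.items()}

# A monolithic deployment couples all stages, so it must replicate the whole
# model by the worst-case stage's requirement.
monolithic_copies = max(per_stage.values())
print("per-stage replicas:", per_stage, "vs monolithic copies:", monolithic_copies)
```

The comparison at the end shows why stage-level granularity can save resources: a monolithic replica has to be duplicated to satisfy its most loaded stage, while disaggregated stages scale only where the bottleneck actually is.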