Haoran Qiu | Microsoft AzRS
Esha Choukse
Latest
ModServe: Modality- and Stage-Aware Resource Disaggregation for Scalable Multimodal Model Serving
Medha: Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations
Towards Efficient Large Multimodal Model Serving
TAPAS: Thermal- and Power-Aware Scheduling for LLM Inference in Cloud Platforms
SmartOClock: Workload- and Risk-Aware Overclocking in the Cloud