Stratos: An End-to-End Distillation Pipeline for Customized LLMs Under Distributed Cloud Environments

Authors

  • Ziming Dai, Tianjin University
  • Tuo Zhang, University of Southern California
  • Fei Gao, Tianjin University
  • Xingyi Cai, Tianjin University
  • Xiaofei Wang, Tianjin University
  • Cheng Zhang, Tianjin University of Finance & Economics
  • Wenyu Wang, Paiou Cloud Computing (Shanghai) Co., Ltd
  • Chengjie Zang, Paiou Cloud Computing (Shanghai) Co., Ltd

DOI:

https://doi.org/10.1609/aaai.v40i47.41495

Abstract

The growing industrial demand for customized and cost-efficient large language models (LLMs) is fueled by the rise of vertical, domain-specific tasks and the need to optimize performance under constraints such as latency and budget. Knowledge distillation, an efficient model compression and transfer technique, offers a feasible solution. However, existing distillation frameworks often require manual intervention and struggle to meet complex user-defined distillation requirements. To bridge this gap, we propose Stratos, an end-to-end LLM distillation pipeline that automates server/model selection, knowledge distillation, and deployment in distributed cloud environments. Given user-defined constraints on model performance and system budget, Stratos automatically selects Pareto-optimal servers, dynamically matches teacher–student pairs, and adapts distillation strategies based on task complexity to optimize cloud hosting. Experiments show that, using reverse synthetic data and knowledge injection, Stratos produces a student model that achieves four times the accuracy of its GPT-4o teacher baseline on a rare, domain-specific Mahjong reasoning task. Moreover, it reduces latency and cost without compromising accuracy. These results highlight its promise for vertical-domain LLM deployment.
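The Pareto-optimal server selection mentioned in the abstract can be understood as a standard dominance filter over competing deployment objectives. The sketch below is purely illustrative: the `Server` fields, the three objectives (hourly cost, latency, throughput), and the candidate list are assumptions for exposition, not the paper's actual interface.

```python
# Illustrative sketch of Pareto-optimal server selection, assuming three
# objectives: minimize hourly cost, minimize latency, maximize throughput.
from dataclasses import dataclass


@dataclass(frozen=True)
class Server:
    name: str
    hourly_cost: float   # USD per hour, lower is better
    latency_ms: float    # lower is better
    throughput: float    # tokens/s, higher is better


def dominates(a: Server, b: Server) -> bool:
    """True if `a` is at least as good as `b` on every objective
    and strictly better on at least one."""
    no_worse = (a.hourly_cost <= b.hourly_cost
                and a.latency_ms <= b.latency_ms
                and a.throughput >= b.throughput)
    strictly_better = (a.hourly_cost < b.hourly_cost
                       or a.latency_ms < b.latency_ms
                       or a.throughput > b.throughput)
    return no_worse and strictly_better


def pareto_front(servers: list[Server]) -> list[Server]:
    """Keep only servers that no other candidate dominates."""
    return [s for s in servers
            if not any(dominates(o, s) for o in servers)]


# Hypothetical candidate pool.
candidates = [
    Server("a100-spot", 1.2, 35.0, 2400.0),
    Server("a100-ondemand", 3.1, 35.0, 2400.0),  # same performance, higher cost
    Server("l4-cheap", 0.4, 90.0, 700.0),
    Server("h100", 4.5, 20.0, 5200.0),
]
front = pareto_front(candidates)
print([s.name for s in front])  # ['a100-spot', 'l4-cheap', 'h100']
```

Here `a100-ondemand` is filtered out because `a100-spot` matches it on latency and throughput at lower cost; the remaining servers each trade one objective against another, so the user's performance/budget constraints would pick among them.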

Published

2026-03-14

How to Cite

Dai, Z., Zhang, T., Gao, F., Cai, X., Wang, X., Zhang, C., … Zang, C. (2026). Stratos: An End-to-End Distillation Pipeline for Customized LLMs Under Distributed Cloud Environments. Proceedings of the AAAI Conference on Artificial Intelligence, 40(47), 40506–40512. https://doi.org/10.1609/aaai.v40i47.41495

Section

IAAI Technical Track on Tools and Methodologies for Moving Faster and Safer