BLADE: Enhancing Black-Box Large Language Models with Small Domain-Specific Models

Authors

  • Haitao Li Tsinghua University
  • Qingyao Ai Tsinghua University
  • Jia Chen Xiaohongshu Inc.
  • Qian Dong Tsinghua University
  • Zhijing Wu Beijing Institute of Technology
  • Yiqun Liu Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v39i23.34620

Abstract

Large Language Models (LLMs) like ChatGPT and GPT-4 are versatile and capable of addressing open-domain question-answering (QA) tasks effectively. However, general LLMs, which are developed on open-domain data, may lack the domain-specific knowledge essential for tasks in vertical domains such as law and medicine. To address this issue, previous approaches either conduct continuous pre-training with domain-specific data or employ retrieval augmentation to support general LLMs in handling QA tasks. Unfortunately, these strategies are either cost-intensive or unreliable in practical applications. To this end, we present a novel framework named BLADE, which enhances Black-box LArge language models with small Domain-spEcific models. BLADE couples a black-box LLM with a small domain-specific LM: the small LM preserves domain-specific knowledge and offers specialized insights, while the general LLM contributes robust language comprehension and reasoning capabilities. Specifically, our method involves three steps: 1) pre-training the small LM on domain-specific data, 2) fine-tuning it with knowledge instruction data, and 3) jointly optimizing the general LLM and the small LM via Bayesian optimization. Experiments on diverse LLMs and datasets across different domains verify the effectiveness of BLADE, demonstrating its potential as an effective and cost-efficient solution for adapting general LLMs to vertical domains.
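The inference flow the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: `small_lm` and `black_box_llm` are hypothetical stubs standing in for the domain-pretrained small LM and the black-box general LLM API, and the prompt format is an assumption.

```python
# Hedged sketch of a BLADE-style inference pipeline (assumed structure,
# not the paper's code). The two model functions are hypothetical stubs.

def small_lm(question: str) -> str:
    """Stand-in for the small domain-specific LM: it would be pre-trained on
    domain data and fine-tuned on knowledge instruction data, then emit
    specialized domain knowledge for the query."""
    return f"[domain knowledge relevant to: {question}]"

def black_box_llm(prompt: str) -> str:
    """Stand-in for the black-box general LLM (e.g., an API call): it
    contributes language comprehension and reasoning over the prompt."""
    return f"Answer derived from: {prompt}"

def blade_answer(question: str) -> str:
    # Step 1: the small LM produces domain-specific insights.
    knowledge = small_lm(question)
    # Step 2: the general LLM reasons over the question plus those insights.
    prompt = f"Question: {question}\nDomain knowledge: {knowledge}\nAnswer:"
    return black_box_llm(prompt)

print(blade_answer("What is the statute of limitations for contract disputes?"))
```

The third step of the method, joint Bayesian optimization of the two models, would tune how the small LM's outputs are generated and consumed; it is omitted here since the abstract gives no further detail.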

Published

2025-04-11

How to Cite

Li, H., Ai, Q., Chen, J., Dong, Q., Wu, Z., & Liu, Y. (2025). BLADE: Enhancing Black-Box Large Language Models with Small Domain-Specific Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24422–24430. https://doi.org/10.1609/aaai.v39i23.34620

Section

AAAI Technical Track on Natural Language Processing II