Query-Efficient Domain Knowledge Stealing Against Large Language Models

Authors

  • Zhengao Li Florida State University
  • Xiaopeng Yuan University of California, Los Angeles
  • Bolin Shen Florida State University
  • Kien Le Florida State University
  • Haohan Wang University of Illinois at Urbana-Champaign
  • Xugui Zhou Louisiana State University
  • Shangqian Gao Florida State University
  • Yushun Dong Florida State University

DOI:

https://doi.org/10.1609/aaai.v40i38.40456

Abstract

Large language models (LLMs) concentrate substantial knowledge in specialized domains due to extensive pretraining and instruction tuning, and they are now central to commercial and scientific practice. Yet access is usually limited to costly, rate-limited interfaces, which motivates methods that can extract targeted domain knowledge with minimal querying effort. A further challenge is that the target domain may be unknown in advance, so naive or generic prompts waste queries and fail to expose the underlying concepts and relations that structure the domain. In this work, we introduce a query-efficient approach for domain-specific knowledge stealing from black-box language models. Rather than issuing random questions or generic templates, our framework performs self-directed exploration, allowing the target model itself to surface the domain's direction and mine its knowledge. Starting from a small and diverse seed, it discovers salient domain entities and induces their relations through structured question families that elicit definitional, functional, and compositional information. A feedback-driven controller analyzes the errors and uncertainty of the extracted surrogate model and uses this signal to refine subsequent queries, all without relying on prior domain knowledge or external resources. We evaluate the method in two expert-centric settings, medicine and finance, and observe consistently better performance while requiring significantly fewer queries.
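The control loop sketched in the abstract can be illustrated in miniature. The sketch below is purely hypothetical and not the authors' implementation: it assumes a fixed set of three question families, an entropy-based uncertainty score for the surrogate, and a greedy selection rule that picks the unasked (entity, family) pair on which the surrogate is most uncertain. All names, templates, and scoring details are illustrative.

```python
import math

# Hypothetical question families eliciting definitional, functional,
# and compositional information, as described in the abstract.
QUESTION_FAMILIES = {
    "definitional": "What is {e}?",
    "functional": "What is {e} used for?",
    "compositional": "What are the components of {e}?",
}


def entropy(probs):
    """Shannon entropy of the surrogate's predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def next_query(seed_entities, surrogate_probs, asked):
    """Greedy feedback-driven step: return the unasked (entity, family,
    question) triple with maximal surrogate uncertainty, or None when the
    query budget over the seed set is exhausted."""
    best, best_h = None, -1.0
    for e in seed_entities:
        for fam, template in QUESTION_FAMILIES.items():
            if (e, fam) in asked:
                continue
            h = entropy(surrogate_probs(e, fam))
            if h > best_h:
                best, best_h = (e, fam, template.format(e=e)), h
    return best
```

In a full loop, each selected question would be sent to the black-box target model, the answer used to update the surrogate, and the pair added to `asked`; uncertainty then shifts toward unexplored regions of the domain, which is the query-saving mechanism the abstract attributes to the controller.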

Published

2026-03-14

How to Cite

Li, Z., Yuan, X., Shen, B., Le, K., Wang, H., Zhou, X., … Dong, Y. (2026). Query-Efficient Domain Knowledge Stealing Against Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 31870–31878. https://doi.org/10.1609/aaai.v40i38.40456

Section

AAAI Technical Track on Natural Language Processing III