Inference-Aware Prompt Optimization for Aligning Black-Box Large Language Models
DOI:
https://doi.org/10.1609/aaai.v40i38.40521
Abstract
Prompt optimization methods have demonstrated significant effectiveness in aligning black-box large language models (LLMs). In parallel, inference scaling strategies such as Best-of-N Sampling and Majority Voting have likewise been shown to improve alignment and performance by trading additional computation for better outputs. However, existing prompt optimization approaches are inference-strategy agnostic; that is, they optimize prompts without accounting for the inference strategy. This constitutes a significant methodological gap, as our empirical and theoretical analysis reveals a strong interdependence between these two paradigms. Moreover, we find that user preferences regarding trade-offs among multiple objectives and inference budgets substantially influence the choice of prompt and inference configuration. To address this gap, we introduce a novel unified framework named IAPO (Inference-Aware Prompt Optimization) that jointly optimizes the prompt and inference scale while remaining aware of the inference budget and different task objectives. We then develop a fixed-budget training algorithm for IAPO, called PSST (Prompt Scaling via Sequential Trimming), and establish finite-budget guarantees on the error probability. Finally, we evaluate the effectiveness of PSST on six tasks, including multi-objective text generation and reasoning, and demonstrate the critical role of incorporating inference-awareness when aligning black-box LLMs via prompt optimization.
Downloads
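To illustrate the inference scaling strategies the abstract refers to, the following is a minimal sketch of Best-of-N Sampling; it is not the paper's implementation, and the `generate` and `score` callables are hypothetical stand-ins for an LLM call and a reward/quality model.

```python
def best_of_n(prompt, generate, score, n=4):
    """Best-of-N Sampling: draw n candidate responses for the same
    prompt and return the one ranked highest by the scoring function.
    `generate` and `score` are stand-ins for an LLM API call and a
    reward model, respectively."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)


# Toy demonstration with deterministic stand-ins: the "model" cycles
# through canned answers and the "reward" prefers longer strings.
answers = iter(["ok", "a longer answer", "short"])
toy_generate = lambda p: next(answers)
toy_score = len

print(best_of_n("align me", toy_generate, toy_score, n=3))
```

Larger N trades more inference compute for a better-scoring output, which is exactly the budget-dependent knob that IAPO optimizes jointly with the prompt.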
Published
2026-03-14
How to Cite
Mahmud, S., Nakamura, M., Wray, K. H., & Zilberstein, S. (2026). Inference-Aware Prompt Optimization for Aligning Black-Box Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 32455–32464. https://doi.org/10.1609/aaai.v40i38.40521
Issue
Section
AAAI Technical Track on Natural Language Processing III