Putting People in LLMs’ Shoes: Generating Better Answers via Question Rewriter

Authors

  • Junhao Chen Osaka University
  • Bowen Wang Osaka University
  • Zhouqiang Jiang Osaka University
  • Yuta Nakashima Osaka University

DOI:

https://doi.org/10.1609/aaai.v39i22.34527

Abstract

Large Language Models (LLMs) have demonstrated significant capabilities, particularly in the domain of question answering (QA). However, their effectiveness in QA is often undermined by the vagueness of user questions. To address this issue, we introduce single-round instance-level prompt optimization, referred to as the question rewriter. By enhancing the intelligibility of human questions for black-box LLMs, our question rewriter improves the quality of generated answers. The rewriter is optimized using direct preference optimization based on feedback collected from automatic criteria for evaluating generated answers; therefore, its training does not require costly human annotations. Experiments across multiple black-box LLMs and long-form question answering (LFQA) datasets demonstrate the efficacy of our method. This paper provides a practical framework for training question rewriters and sets a precedent for future explorations in prompt optimization within LFQA tasks.
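The training loop the abstract describes can be sketched at a high level: candidate rewrites of a question are answered by a black-box LLM, the answers are scored by automatic criteria, and higher- vs. lower-scoring rewrites form the chosen/rejected pairs that direct preference optimization (DPO) consumes. The sketch below is not the authors' code; `llm_answer` and `score_answer` are hypothetical stand-ins for the black-box LLM and the paper's automatic evaluation criteria.

```python
# Hedged sketch: building DPO preference pairs for a question rewriter
# from automatically scored answers. `llm_answer` and `score_answer` are
# illustrative placeholders, not the paper's actual components.

def llm_answer(question):
    # Placeholder for a black-box LLM call; simply echoes the question here.
    return f"Answer to: {question}"

def score_answer(answer):
    # Placeholder automatic criterion; here, longer answers score higher.
    return len(answer)

def build_preference_pairs(original_question, rewrites):
    """For each pair of rewrites, label the one whose generated answer
    scores higher as 'chosen' and the other as 'rejected' -- the
    (prompt, chosen, rejected) triple format DPO training expects."""
    scored = [(r, score_answer(llm_answer(r))) for r in rewrites]
    scored.sort(key=lambda t: t[1], reverse=True)
    pairs = []
    for i in range(len(scored)):
        for j in range(i + 1, len(scored)):
            if scored[i][1] > scored[j][1]:
                pairs.append({
                    "prompt": original_question,
                    "chosen": scored[i][0],
                    "rejected": scored[j][0],
                })
    return pairs

pairs = build_preference_pairs(
    "tax help?",
    ["How do I file taxes?",
     "What tax deductions can a freelancer in the US claim?"],
)
```

Because the preference signal comes entirely from automatic scoring of the downstream answers, this loop needs no human annotation, which is the cost advantage the abstract highlights.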

Published

2025-04-11

How to Cite

Chen, J., Wang, B., Jiang, Z., & Nakashima, Y. (2025). Putting People in LLMs’ Shoes: Generating Better Answers via Question Rewriter. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23577-23585. https://doi.org/10.1609/aaai.v39i22.34527

Section

AAAI Technical Track on Natural Language Processing I