Preference Is More than Comparisons: Rethinking Dueling Bandits with Augmented Human Feedback

Authors

  • Shengbo Wang, University of Electronic Science and Technology of China
  • Hong Sun, University of Electronic Science and Technology of China
  • Ke Li, University of Exeter

DOI:

https://doi.org/10.1609/aaai.v40i31.39852

Abstract

Interactive preference elicitation (IPE) aims to substantially reduce human effort when acquiring human preferences in a wide range of personalization systems. Dueling bandit (DB) algorithms enable optimal decision-making in IPE based on pairwise comparisons. However, they remain inefficient when human feedback is sparse. Existing methods address sparsity by relying heavily on parametric reward models, whose rigid assumptions are vulnerable to misspecification. In contrast, we explore an alternative perspective based on feedback augmentation and introduce critical improvements to the model-free DB framework. Specifically, we propose augmented confidence bounds that integrate augmented human feedback under generalized concentration properties, and we analyze the resulting multi-factor performance trade-off via regret analysis. Our prototype algorithm achieves competitive performance across several IPE benchmarks, including recommendation, multi-objective optimization, and response optimization for large language models, demonstrating the potential of our approach for provably efficient IPE in broader applications.
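The page carries only the abstract, not the algorithm itself. As a rough point of reference for the model-free confidence-bound machinery the abstract builds on, the sketch below implements a standard RUCB-style dueling bandit in which pairwise win counts drive upper confidence bounds; a fractional `weight` hints at one way augmented feedback could enter as pseudo-counts. All names, parameter values, and the augmentation rule are assumptions for illustration, not the paper's augmented confidence bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                    # number of arms (hypothetical problem size)
wins = np.ones((K, K))   # pairwise win counts, optimistic unit prior
alpha = 0.51             # exploration scale for the confidence radius

def ucb_matrix(t):
    """Upper confidence bounds on P(arm i beats arm j)."""
    n = wins + wins.T                            # comparisons per pair
    p_hat = wins / n                             # empirical win rates
    radius = np.sqrt(alpha * np.log(t + 1.0) / n)
    return np.clip(p_hat + radius, 0.0, 1.0)

def select_duel(t):
    """Pick a candidate Condorcet winner and its most optimistic
    challenger (an RUCB-style heuristic)."""
    U = ucb_matrix(t)
    np.fill_diagonal(U, 0.5)
    candidates = np.flatnonzero((U >= 0.5).all(axis=1))
    i = int(candidates[0]) if candidates.size else int(rng.integers(K))
    col = U[:, i].copy()
    col[i] = -np.inf                             # forbid self-duels
    j = int(np.argmax(col))
    return i, j

def update(i, j, i_won, weight=1.0):
    """Record one comparison. A weight < 1 stands in for augmented
    feedback (e.g. inferred comparisons) as fractional pseudo-counts;
    this augmentation rule is a placeholder, not the paper's method."""
    if i_won:
        wins[i, j] += weight
    else:
        wins[j, i] += weight

# Toy run: a simulated annotator with latent Bradley-Terry utilities.
utility = np.linspace(0.0, 1.0, K)
for t in range(1, 2001):
    i, j = select_duel(t)
    p_i_wins = 1.0 / (1.0 + np.exp(utility[j] - utility[i]))
    update(i, j, rng.random() < p_i_wins)
print("estimated best arm:", int(np.argmax((wins / (wins + wins.T)).sum(axis=1))))
```

Under this Bradley-Terry simulator the loop concentrates comparisons on near-optimal arms; per the abstract, the paper's contribution lies in replacing the unit-count update with augmented human feedback under generalized concentration properties, which this sketch does not attempt to reproduce.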

Published

2026-03-14

How to Cite

Wang, S., Sun, H., & Li, K. (2026). Preference Is More than Comparisons: Rethinking Dueling Bandits with Augmented Human Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 40(31), 26453–26461. https://doi.org/10.1609/aaai.v40i31.39852

Section

AAAI Technical Track on Machine Learning VIII