Task-level Distributionally Robust Optimization for Large Language Model-based Dense Retrieval

Authors

  • Guangyuan Ma, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
  • Yongliang Ma, Langboat Technology, Beijing, China
  • Xing Wu, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
  • Zhenpeng Su, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
  • Ming Zhou, Langboat Technology, Beijing, China
  • Songlin Hu, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v39i23.34657

Abstract

Large Language Model-based Dense Retrieval (LLM-DR) optimizes over numerous heterogeneous fine-tuning collections from different domains. However, discussion of its training data distribution remains limited. Previous studies rely on empirically assigned dataset choices or sampling ratios, which inevitably lead to sub-optimal retrieval performance. In this paper, we propose a new task-level Distributionally Robust Optimization (tDRO) algorithm for LLM-DR fine-tuning, which targets universal domain generalization by reweighting the data distribution of each task end to end. tDRO parameterizes the domain weights and updates them with scaled domain gradients. The optimized weights are then transferred to LLM-DR fine-tuning to train more robust retrievers. Experiments with a series of different-sized LLM-DR models show optimal improvements on large-scale retrieval benchmarks while reducing dataset usage by up to 30% after applying our optimization algorithm.
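The weight-update idea described in the abstract can be sketched with a minimal multiplicative-weights step. This is an illustrative sketch only, not the paper's exact formulation: the function name `tdro_weight_update`, the learning rate, and the use of per-domain loss gaps against a baseline as a stand-in for the paper's scaled domain gradients are all assumptions for exposition.

```python
import numpy as np

def tdro_weight_update(weights, domain_losses, baseline_losses, lr=0.1):
    """Hypothetical sketch of a distributionally-robust weight update.

    Domains whose current loss exceeds a baseline (i.e., harder domains)
    receive larger sampling weights; the weights are then renormalized
    to form a valid distribution over tasks.
    """
    # Positive gap => this domain is under-served by the current model.
    gains = np.asarray(domain_losses) - np.asarray(baseline_losses)
    # Exponentiated-gradient (multiplicative-weights) style update.
    new_w = np.asarray(weights) * np.exp(lr * gains)
    return new_w / new_w.sum()

# Two domains, equal initial weights; domain 0 has a larger loss gap,
# so its sampling weight increases after the update.
w = tdro_weight_update([0.5, 0.5], [1.0, 0.5], [0.5, 0.5])
print(w)
```

The resulting weights would then be frozen and used as task sampling ratios during the retriever's contrastive fine-tuning, as the abstract describes.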

Published

2025-04-11

How to Cite

Ma, G., Ma, Y., Wu, X., Su, Z., Zhou, M., & Hu, S. (2025). Task-level Distributionally Robust Optimization for Large Language Model-based Dense Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24759–24767. https://doi.org/10.1609/aaai.v39i23.34657

Section

AAAI Technical Track on Natural Language Processing II