Importance Weighting Can Help Large Language Models Self-Improve

Authors

  • Chunyang Jiang Hong Kong University of Science and Technology
  • Chi-Min Chan Hong Kong University of Science and Technology
  • Wei Xue Hong Kong University of Science and Technology
  • Qifeng Liu Hong Kong University of Science and Technology
  • Yike Guo Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v39i23.34602

Abstract

Large language models (LLMs) have shown remarkable capability across numerous tasks and applications. However, fine-tuning LLMs on high-quality datasets under external supervision remains prohibitively expensive. In response, LLM self-improvement approaches have been actively developed recently. The typical paradigm of LLM self-improvement involves training the LLM on self-generated data, part of which may be detrimental and should be filtered out because of unstable data quality. While current works primarily employ filtering strategies based on answer correctness, in this paper we demonstrate that filtering out samples that are correct but exhibit a high distribution shift extent (DSE) can also benefit self-improvement. Since the actual sample distribution is usually inaccessible, we propose a new metric, called DS weight, to approximate DSE, inspired by importance weighting methods. We then integrate DS weight with self-consistency to comprehensively filter the self-generated samples and fine-tune the language model. Experiments show that with only a small validation set (at most 5% of the training set's size) to compute DS weight, our approach can notably improve the reasoning ability of current LLM self-improvement methods, achieving performance on par with methods that rely on external supervision from pre-trained reward models.
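To make the filtering pipeline described in the abstract concrete, here is a minimal, hypothetical sketch of the two combined criteria: self-consistency (keep an answer only if it matches the majority vote over sampled generations) and an importance-weighting-style score (a density ratio between a validation-set distribution and the model's generation distribution, standing in for the paper's DS weight). The field names (`p_valid`, `p_model`, `sampled_answers`) and the thresholding scheme are illustrative assumptions, not the authors' actual implementation.

```python
from collections import Counter

def majority_answer(answers):
    """Self-consistency: the most frequent answer among sampled generations."""
    return Counter(answers).most_common(1)[0][0]

def importance_weight(p_valid, p_model):
    """Density-ratio weight w(x) = p_valid(x) / p_model(x).
    A low value flags a sample the model over-generates relative to the
    (small) validation distribution, i.e. a high distribution shift."""
    return p_valid / p_model

def filter_samples(samples, weight_threshold):
    """Keep a self-generated sample only if (a) its answer agrees with the
    self-consistency majority vote and (b) its importance weight exceeds
    the threshold (low distribution shift). Both fields on each sample
    (`p_valid`, `p_model`) are assumed to come from density estimates."""
    kept = []
    for s in samples:
        consensus = majority_answer(s["sampled_answers"])
        w = importance_weight(s["p_valid"], s["p_model"])
        if s["answer"] == consensus and w >= weight_threshold:
            kept.append(s)
    return kept
```

In this sketch, a sample with the correct majority answer can still be dropped when its weight is low, which is the abstract's key point: correctness-based filtering alone misses correct-but-shifted samples.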

Published

2025-04-11

How to Cite

Jiang, C., Chan, C.-M., Xue, W., Liu, Q., & Guo, Y. (2025). Importance Weighting Can Help Large Language Models Self-Improve. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24257–24265. https://doi.org/10.1609/aaai.v39i23.34602

Section

AAAI Technical Track on Natural Language Processing II