Training with “Paraphrasing the Original Text” Teaches LLM to Better Retrieve in Long-Context Tasks

Authors

  • Yijiong Yu Tsinghua University
  • Yongfeng Huang Tsinghua University; Zhongguancun Laboratory; Institute for Precision Medicine, Tsinghua University
  • Zhixiao Qi Tsinghua University
  • Zhe Zhou Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v39i24.34767

Abstract

As Large Language Models (LLMs) continue to evolve, more of them are designed to handle long-context inputs. Despite this advancement, most still struggle to handle long-context tasks accurately, often exhibiting the "lost in the middle" issue. We identify insufficient retrieval capability as one of the main causes of this issue. To tackle this challenge, we propose a novel approach to designing training data for long-context tasks, aimed at strengthening LLMs' ability to extract key information from long contexts. Specifically, we incorporate an additional part named "paraphrasing the original text" when constructing the answers of training samples and then fine-tune the model on them. Experiments on LongBench and the NaturalQuestions multi-document QA dataset with models from the Llama and Qwen series show that our method improves average scores by up to 8.48% and 4.48%, respectively, demonstrating its effectiveness on long-context tasks.
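To illustrate the idea described in the abstract, the sketch below shows one plausible way to construct such a training sample, where the target answer first restates the supporting passage before giving the final answer. This is a hypothetical illustration, not the authors' released code; the prompt format, function name, and wording of the paraphrase prefix are assumptions.

```python
# Hypothetical sketch (assumed format, not the authors' implementation):
# build a fine-tuning sample whose answer begins by "paraphrasing the
# original text", i.e. restating the relevant passage from the long
# context before stating the final answer.

def build_training_sample(documents, question, relevant_passage, final_answer):
    """Assemble one supervised sample in a simple prompt/response format.

    `documents` is a list of context strings; `relevant_passage` is the span
    (taken from one of the documents) that supports `final_answer`.
    """
    # Concatenate all documents into one long context.
    context = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )

    prompt = (
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer the question based on the documents above."
    )

    # The extra "paraphrasing the original text" part: the target output
    # restates the supporting passage first, which is intended to teach the
    # model to retrieve the key information from the long context before
    # answering.
    response = (
        f'The relevant original text is: "{relevant_passage}"\n'
        f"Therefore, the answer is: {final_answer}"
    )

    return {"prompt": prompt, "response": response}


# Example usage with toy data:
sample = build_training_sample(
    documents=[
        "... a long distractor document ...",
        "The Eiffel Tower was completed in 1889 for the World's Fair.",
    ],
    question="When was the Eiffel Tower completed?",
    relevant_passage="The Eiffel Tower was completed in 1889 for the World's Fair.",
    final_answer="1889",
)
print(sample["response"])
```

Samples built this way would then be used for standard supervised fine-tuning of a long-context model.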

Published

2025-04-11

How to Cite

Yu, Y., Huang, Y., Qi, Z., & Zhou, Z. (2025). Training with “Paraphrasing the Original Text” Teaches LLM to Better Retrieve in Long-Context Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25751–25759. https://doi.org/10.1609/aaai.v39i24.34767

Issue

Vol. 39 No. 24 (2025)

Section

AAAI Technical Track on Natural Language Processing III