Importance-Aware Data Selection for Efficient LLM Instruction Tuning

Authors

  • Tingyu Jiang, Alibaba Cloud Computing
  • Shen Li, Alibaba Cloud Computing
  • Yiyao Song, Alibaba Cloud Computing
  • Lan Zhang, Independent Researcher
  • Hualei Zhu, Alibaba Cloud Computing
  • Yuan Zhao, Alibaba Cloud Computing
  • Xiaohang Xu, Graduate School of Information Science and Technology, The University of Tokyo
  • Kenjiro Taura, Graduate School of Information Science and Technology, The University of Tokyo
  • Hao Henry Wang, Alibaba Cloud Computing

DOI:

https://doi.org/10.1609/aaai.v40i37.40396

Abstract

Instruction tuning plays a critical role in enhancing the performance and efficiency of Large Language Models (LLMs). Its success depends not only on the quality of the instruction data but also on the inherent capabilities of the LLM itself. Some studies suggest that even a small amount of high-quality data can achieve instruction fine-tuning results on par with, or even exceeding, those obtained from a full-scale dataset. However, rather than focusing solely on computing data quality scores to evaluate instruction data, there is a growing need to select the data that maximally enhances instruction-tuning performance for a given LLM. In this paper, we propose the Model Instruction Weakness Value (MIWV), a novel metric that quantifies the importance of instruction data in enhancing a model's capabilities. The MIWV metric is derived from the discrepancies in the model's responses when using In-Context Learning (ICL), helping identify the data most beneficial for improving instruction-tuning performance. Our experimental results demonstrate that selecting only the top 1% of data by MIWV can outperform training on the full dataset. This approach moves beyond existing research centered on data quality scoring for data selection, and our experiments provide strong empirical evidence for its effectiveness.
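The selection procedure sketched in the abstract can be illustrated with a minimal example. Everything below is an assumption for illustration: the abstract does not give the exact MIWV formula, so `miwv_score` here simply takes the gap between a zero-shot loss and a with-ICL loss as the discrepancy, and `select_top_fraction` keeps the top 1% by that score. The losses are fabricated; no real LLM is invoked.

```python
# Hypothetical sketch of MIWV-style data selection (function names and the
# exact scoring formula are assumptions, not the paper's published method).
from typing import List


def miwv_score(loss_zero_shot: float, loss_with_icl: float) -> float:
    # Discrepancy between the model's two response conditions: a large gap
    # suggests the example targets a "weakness" ICL can expose.
    return loss_zero_shot - loss_with_icl


def select_top_fraction(examples: List[str],
                        scores: List[float],
                        fraction: float = 0.01) -> List[str]:
    # Rank by descending score and keep the requested fraction (at least one).
    k = max(1, int(len(examples) * fraction))
    ranked = sorted(zip(scores, examples), key=lambda p: p[0], reverse=True)
    return [ex for _, ex in ranked[:k]]


# Toy usage with fabricated per-example losses:
examples = [f"instruction_{i}" for i in range(200)]
zero_shot = [1.0 + (i % 7) * 0.1 for i in range(200)]
with_icl = [0.8 + (i % 5) * 0.05 for i in range(200)]
scores = [miwv_score(z, c) for z, c in zip(zero_shot, with_icl)]
subset = select_top_fraction(examples, scores, fraction=0.01)
print(len(subset))  # top 1% of 200 examples -> 2
```

In a real pipeline the two losses would come from scoring each instruction's reference response under the base model with and without in-context demonstrations, which is the expensive part; the ranking and selection step itself is trivial.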

Published

2026-03-14

How to Cite

Jiang, T., Li, S., Song, Y., Zhang, L., Zhu, H., Zhao, Y., Xu, X., Taura, K., & Wang, H. H. (2026). Importance-Aware Data Selection for Efficient LLM Instruction Tuning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(37), 31328-31336. https://doi.org/10.1609/aaai.v40i37.40396

Section

AAAI Technical Track on Natural Language Processing II