Chain-of-Instructions: Compositional Instruction Tuning on Large Language Models

Authors

  • Shirley Anugrah Hayati, University of Minnesota - Twin Cities
  • Taehee Jung, Amazon
  • Tristan Bodding-Long, Amazon
  • Sudipta Kar, Amazon
  • Abhinav Sethy, Grammarly
  • Joo-Kyung Kim, Amazon
  • Dongyeop Kang, University of Minnesota

DOI:

https://doi.org/10.1609/aaai.v39i22.34574

Abstract

Fine-tuning large language models (LLMs) on a large and diverse collection of instructions improves the model’s generalization to different tasks, even unseen ones. However, most existing instruction datasets include only single instructions, and models tuned on them struggle to follow complex instructions composed of multiple subtasks. In this work, we propose a novel concept of compositional instructions called chain-of-instructions (CoI), where the output of one instruction becomes the input for the next, like a chain. Unlike the conventional practice of solving single-instruction tasks, our proposed method encourages a model to solve each subtask step by step until the final answer is reached. CoI-tuning (i.e., fine-tuning with CoI instructions) improves the model’s ability to handle instructions composed of multiple subtasks as well as unseen composite tasks such as multilingual summarization. Overall, our study finds that simple CoI tuning of existing instruction data provides consistent generalization to solve more complex, unseen, and longer chains of instructions.
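
The chaining described in the abstract can be made concrete with a short sketch. The Python snippet below is an illustrative rendering of CoI-style decomposition, not the authors' pipeline: the call_llm stub, the prompt format, and the example subtasks are all hypothetical assumptions.

from typing import Callable, List

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an API or a local model)."""
    raise NotImplementedError

def solve_chain(instructions: List[str], initial_input: str,
                llm: Callable[[str], str] = call_llm) -> str:
    """Apply each sub-instruction in order, passing the previous output
    forward as the next input (the chain-of-instructions idea)."""
    current = initial_input
    for step, instruction in enumerate(instructions, start=1):
        prompt = f"Step {step}: {instruction}\n\nInput:\n{current}"
        current = llm(prompt)  # this subtask's output feeds the next one
    return current

# Example composite task (an unseen combination such as multilingual summarization):
# solve_chain(["Summarize the article.",
#              "Translate the summary into French."],
#             article_text)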

Published

2025-04-11

How to Cite

Hayati, S. A., Jung, T., Bodding-Long, T., Kar, S., Sethy, A., Kim, J.-K., & Kang, D. (2025). Chain-of-Instructions: Compositional Instruction Tuning on Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 24005–24013. https://doi.org/10.1609/aaai.v39i22.34574

Issue

Vol. 39 No. 22

Section

AAAI Technical Track on Natural Language Processing I