Benchmarking Large Language Models on Controllable Generation under Diversified Instructions

Authors

  • Yihan Chen, University of Science and Technology of China
  • Benfeng Xu, University of Science and Technology of China
  • Quan Wang, MOE Key Laboratory of Trustworthy Distributed Computing and Service, Beijing University of Posts and Telecommunications
  • Yi Liu, State Key Laboratory of Communication Content Cognition, People’s Daily Online, Beijing, China
  • Zhendong Mao, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v38i16.29734

Keywords:

NLP: (Large) Language Models, NLP: Generation, NLP: Interpretability, Analysis, and Evaluation of NLP Models

Abstract

While large language models (LLMs) have exhibited impressive instruction-following capabilities, it remains unclear whether, and to what extent, they can respond to explicit constraints embedded in various instructions. Because this is a significant aspect of LLM alignment, it is important to formulate such a specialized set of instructions and to investigate the resulting behavior of LLMs. To address this gap, we propose a new benchmark, CoDI-Eval, to systematically and comprehensively evaluate LLMs' responses to instructions with various constraints. We construct a large collection of constraint-attributed instructions as a test suite focused on both generalization and coverage. Specifically, we advocate an instruction diversification process to synthesize diverse forms of constraint expression, and we carefully design the candidate task taxonomy with finer-grained sub-categories. Finally, we automate the entire evaluation process to facilitate further development. Unlike existing studies on controllable text generation, CoDI-Eval extends the scope to the prevalent instruction-following paradigm for the first time. We provide extensive evaluations of representative LLMs (e.g., ChatGPT, Vicuna) on CoDI-Eval, revealing their limitations in following instructions with specific constraints and showing that a significant gap remains between open-source and commercial closed-source LLMs. We believe this benchmark will facilitate research into improving the controllability of LLMs' responses to instructions. Our data and code are available at https://github.com/Xt-cyh/CoDI-Eval.
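
For illustration only, the sketch below shows what an automated check of a constraint-attributed instruction could look like. It is not taken from the paper or its repository: the instruction text, the two example constraints (keyword inclusion and a length limit), and the generate_response stub are all hypothetical assumptions.

    # A minimal, hypothetical sketch of automated constraint checking in the
    # spirit of CoDI-Eval. The instruction, constraints, and generate_response()
    # stub are illustrative assumptions, not the benchmark's actual data or code.

    def generate_response(instruction: str) -> str:
        """Stand-in for a call to the LLM under evaluation (e.g., ChatGPT or Vicuna)."""
        raise NotImplementedError

    def check_constraints(response: str, keyword: str, max_words: int) -> bool:
        """Verify two example constraints: keyword inclusion and a word-count limit."""
        words = response.split()
        return keyword.lower() in response.lower() and len(words) <= max_words

    instruction = (
        "Write one sentence about travel that contains the word 'adventure' "
        "and uses at most 15 words."
    )

    # response = generate_response(instruction)
    # print(check_constraints(response, keyword="adventure", max_words=15))

In the benchmark itself, such checks would be run automatically over many instructions and constraint categories; the snippet above only illustrates the general idea of programmatic constraint verification.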

Published

2024-03-24

How to Cite

Chen, Y., Xu, B., Wang, Q., Liu, Y., & Mao, Z. (2024). Benchmarking Large Language Models on Controllable Generation under Diversified Instructions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17808-17816. https://doi.org/10.1609/aaai.v38i16.29734

Issue

Vol. 38 No. 16 (2024)

Section

AAAI Technical Track on Natural Language Processing I