TaskLAMA: Probing the Complex Task Understanding of Language Models

Authors

  • Quan Yuan, Google Research
  • Mehran Kazemi, Google Research
  • Xin Xu, Google Research
  • Isaac Noble, Google Research
  • Vaiva Imbrasaite, Google Research
  • Deepak Ramachandran, Google Research

DOI:

https://doi.org/10.1609/aaai.v38i17.29918

Keywords:

NLP: (Large) Language Models, APP: Other Applications, NLP: Interpretability, Analysis, and Evaluation of NLP Models, PRS: Temporal Planning

Abstract

Structured Complex Task Decomposition (SCTD) is the problem of breaking down a complex real-world task (such as planning a wedding) into a directed acyclic graph over individual steps that contribute to achieving the task, with edges specifying temporal dependencies between steps. SCTD is an important component of assistive planning tools, and a challenge for commonsense reasoning systems. We probe how accurately SCTD can be done with the knowledge extracted from pre-trained Large Language Models (LLMs). We introduce a new high-quality human-annotated dataset for this problem and novel metrics to fairly assess the performance of LLMs against several baselines. Our experiments reveal that LLMs are able to decompose complex tasks into individual steps effectively, with a relative improvement of 15% to 280% over the best baseline. We also propose a number of approaches to further improve their performance, with a relative improvement of 7% to 37%. However, we find that LLMs still struggle to predict pairwise temporal dependencies, which reveals a gap in their understanding of complex tasks.
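
The abstract's definition of the SCTD output, a directed acyclic graph over steps with edges encoding temporal dependencies, is easy to make concrete. Below is a minimal sketch of such a representation together with a topological-sort validity check; the task, step names, and edges are illustrative placeholders, not items from the paper's dataset.

```python
from collections import defaultdict

# Hypothetical SCTD output for the task "plan a wedding".
# Nodes are steps; an edge (a, b) means step a must happen before step b.
steps = ["set a budget", "book a venue", "hire a caterer", "send invitations"]
edges = [
    ("set a budget", "book a venue"),
    ("set a budget", "hire a caterer"),
    ("book a venue", "send invitations"),
]

def topological_order(steps, edges):
    """Return one valid execution order, or raise if the edges form a cycle."""
    successors = defaultdict(set)
    indegree = {s: 0 for s in steps}
    for a, b in edges:
        if b not in successors[a]:
            successors[a].add(b)
            indegree[b] += 1
    # Steps with no unmet dependencies are ready to execute.
    ready = [s for s in steps if indegree[s] == 0]
    order = []
    while ready:
        s = ready.pop()
        order.append(s)
        for t in successors[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    if len(order) != len(steps):
        raise ValueError("temporal dependencies contain a cycle; not a DAG")
    return order

print(topological_order(steps, edges))
```

In this framing, the step-prediction task the paper reports LLMs do well on produces the nodes, while the harder pairwise temporal-dependency prediction determines the edges; the acyclicity check above is one simple consistency test such a graph must pass.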

Published

2024-03-24

How to Cite

Yuan, Q., Kazemi, M., Xu, X., Noble, I., Imbrasaite, V., & Ramachandran, D. (2024). TaskLAMA: Probing the Complex Task Understanding of Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19468-19476. https://doi.org/10.1609/aaai.v38i17.29918

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II