A Comprehensive Evaluation on Event Reasoning of Large Language Models

Authors

  • Zhengwei Tao, School of Computer Science, Peking University; MoE Key Lab. of High Confidence Software Technologies (PKU), China; Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong; MoE Key Lab. of High Confidence Software Technologies (Hong Kong), China
  • Zhi Jin, School of Computer Science, Peking University; MoE Key Lab. of High Confidence Software Technologies (PKU), China
  • Yifan Zhang, School of Computer Science, Peking University; MoE Key Lab. of High Confidence Software Technologies (PKU), China
  • Xiancai Chen, School of Computer Science, Peking University; MoE Key Lab. of High Confidence Software Technologies (PKU), China
  • Haiyan Zhao, School of Computer Science, Peking University; MoE Key Lab. of High Confidence Software Technologies (PKU), China
  • Jia Li, School of Computer Science, Peking University; MoE Key Lab. of High Confidence Software Technologies (PKU), China
  • Bin Liang, Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong; MoE Key Lab. of High Confidence Software Technologies (Hong Kong), China
  • Chongyang Tao, Beihang University
  • Qun Liu, Huawei Noah's Ark Lab
  • Kam-Fai Wong, Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong; MoE Key Lab. of High Confidence Software Technologies (Hong Kong), China

DOI:

https://doi.org/10.1609/aaai.v39i24.34714

Abstract

Event reasoning is a fundamental ability that underlies many applications. It requires event schema knowledge to perform global reasoning and must handle the diversity of inter-event relations and reasoning paradigms. The extent to which LLMs excel at event reasoning across these various relations and paradigms has not been thoroughly investigated, and it remains unclear whether LLMs utilize event knowledge in the same way humans do. To address these gaps, we comprehensively evaluate the event reasoning abilities of LLMs across different relations, paradigms, and levels of abstraction. We introduce EV2, a novel benchmark for EValuation of EVent reasoning. EV2 evaluates at two levels, schema and instance, and is comprehensive in its coverage of relations and reasoning paradigms. Extensive experiments on EV2 reveal that 1) LLMs are capable of event reasoning, but their performance is far from satisfactory; 2) their event reasoning abilities are imbalanced across relations and paradigms; and 3) LLMs possess event schema knowledge but are not aligned with humans in how they utilize it. Based on these findings, we guide LLMs to utilize event schema knowledge as memory, leading to improvements in event reasoning.

Published

2025-04-11

How to Cite

Tao, Z., Jin, Z., Zhang, Y., Chen, X., Zhao, H., Li, J., … Wong, K.-F. (2025). A Comprehensive Evaluation on Event Reasoning of Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25273–25281. https://doi.org/10.1609/aaai.v39i24.34714

Section

AAAI Technical Track on Natural Language Processing III