Exploring Equation as a Better Intermediate Meaning Representation for Numerical Reasoning of Large Language Models

Authors

  • Dingzirui Wang Harbin Institute of Technology
  • Longxu Dou Harbin Institute of Technology
  • Wenbin Zhang Yunfu Technology (Beijing) Co., Ltd.
  • Junyu Zeng Yunfu Technology (Beijing) Co., Ltd.
  • Wanxiang Che Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v38i17.29879

Keywords:

NLP: Question Answering, NLP: (Large) Language Models

Abstract

Numerical reasoning is a vital capability for natural language processing models to understand and process numerical information in real-world scenarios. Most current methods first generate the Intermediate Meaning Representations (IMRs) of questions and then generate answers. Current state-of-the-art (SOTA) methods generate programs as IMRs with large language models (LLMs). Intuitively, equations have fewer restrictions and semantics closer to the question than programs do, which should lead to higher generation accuracy. However, current LLMs generate equations less accurately than programs, which we attribute to equation data being scarce in pre-training corpora compared to program data. In this paper, we therefore use equations as IMRs to solve the numerical reasoning task by addressing two problems: (1) theoretically, how to prove that equations are IMRs with higher generation accuracy than programs; (2) empirically, how to improve the accuracy of LLMs in generating equations. For the first problem, we propose and prove a proposition to theoretically compare the generation accuracy of different IMRs. For the second problem, we present a method called Bridge (Boosting Numerical ReasonIng by Decomposing the Generation of Equations), which improves the accuracy of LLMs in generating equations as IMRs by reducing their tendency to generate constant expressions and programs. Our method improves performance by 2.2%, 0.9%, and 1.7% on the GSM8K, SVAMP, and Algebra datasets compared to the previous state-of-the-art methods under the single reasoning path setting. Our code and prompts are available at https://github.com/zirui-HIT/Bridge_for_Numerical_Reasoning.
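To make the program-versus-equation distinction concrete, here is a minimal illustrative sketch (not taken from the paper) of the two IMR styles for a toy GSM8K-style question; the question text, function name, and variable names are invented for illustration, and SymPy is used only as an example equation solver.

```python
# Illustrative sketch: the same word problem expressed as a program-style IMR
# (imperative steps that compute the answer) versus an equation-style IMR
# (an unknown plus the relation it must satisfy, handed to a solver).
from sympy import Eq, solve, symbols

question = (
    "Alice has 3 apples and buys 4 bags with 2 apples each. "
    "How many apples does she have now?"
)

# Program-style IMR: step-by-step computation, as in program-of-thought prompting.
def program_imr():
    apples = 3
    bought = 4 * 2
    return apples + bought

# Equation-style IMR: declare the unknown and state the equation it satisfies.
x = symbols("x")
equation_imr = Eq(x, 3 + 4 * 2)

print(program_imr())              # 11
print(solve(equation_imr, x)[0])  # 11
```

The equation form stays closer to how the question is phrased (an unknown constrained by stated quantities), which is the intuition the abstract appeals to when arguing equations are a better IMR.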

Published

2024-03-24

How to Cite

Wang, D., Dou, L., Zhang, W., Zeng, J., & Che, W. (2024). Exploring Equation as a Better Intermediate Meaning Representation for Numerical Reasoning of Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19116-19125. https://doi.org/10.1609/aaai.v38i17.29879

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II