Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning


  • Subhabrata Dutta Indian Institute of Technology Delhi
  • Ishan Pandey Indraprastha Institute of Information Technology Delhi
  • Joykirat Singh Indraprastha Institute of Information Technology Delhi
  • Sunny Manchanda DYSL-AI, India
  • Soumen Chakrabarti Indian Institute of Technology Bombay
  • Tanmoy Chakraborty Indian Institute of Technology Delhi



NLP: (Large) Language Models, ML: Applications, NLP: Learning & Optimization for NLP


Large Language Models (LLMs) exhibit zero-shot mathematical reasoning capacity as a behavior emergent with scale, commonly manifesting as chain-of-thought (CoT) reasoning. However, multiple empirical findings suggest that this prowess is exclusive to LLMs of exorbitant size (beyond 50 billion parameters). Meanwhile, educational neuroscientists suggest that symbolic algebraic manipulation be introduced around the same time as arithmetic word problems, so as to modularize language-to-formulation, symbolic manipulation of the formulation, and endgame arithmetic. In this paper, we start with the hypothesis that much smaller LMs, which are weak at multi-step reasoning, can achieve reasonable arithmetic reasoning if arithmetic word problems are posed as a formalize-then-solve task. In our architecture, which we call SyReLM, the LM serves as a translator, mapping natural-language arithmetic questions into a formal language (FL) description. A symbolic solver then evaluates the FL expression to obtain the answer. A small frozen LM, equipped with an efficient low-rank adapter, is capable of generating FL expressions that incorporate natural-language descriptions of the arithmetic problem (e.g., variable names and their purposes, formal expressions combining variables, etc.). We adopt policy-gradient reinforcement learning to train the adapted LM, informed by the non-differentiable symbolic solver. This marks a sharp departure from recent developments in tool-augmented LLMs, in which the external tools (e.g., calculator, Web search, etc.) are essentially detached from the learning phase of the LM. SyReLM shows massive improvements (e.g., a +30.65 absolute-point gain in accuracy on the SVAMP dataset using the GPT-J 6B model) over base LMs, while keeping our testbed easy to diagnose and interpret, and within the reach of most researchers.
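The formalize-then-solve pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual system: the FL syntax, the example word problem, and the `solve` helper are all hypothetical stand-ins for the LM-emitted formalization and the symbolic solver.

```python
import ast
import operator

# Hypothetical FL program a small LM might emit for the word problem:
# "A shop had 45 apples and sold 18. How many remain?"
# Each line binds a named variable (with a natural-language gloss) or
# defines the final answer as an arithmetic expression over variables.
FL_PROGRAM = """
apples_initial = 45      # apples the shop started with
apples_sold = 18         # apples sold during the day
answer = apples_initial - apples_sold
"""

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _eval_expr(node, env):
    """Recursively evaluate an arithmetic AST node against variable bindings."""
    if isinstance(node, ast.BinOp):
        return _OPS[type(node.op)](_eval_expr(node.left, env),
                                   _eval_expr(node.right, env))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -_eval_expr(node.operand, env)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return env[node.id]
    raise ValueError(f"unsupported FL construct: {ast.dump(node)}")

def solve(fl_program):
    """Symbolic-solver stand-in: bind each variable, then return 'answer'."""
    env = {}
    for line in fl_program.strip().splitlines():
        stmt = line.split("#")[0].strip()  # drop the natural-language gloss
        if not stmt:
            continue
        name, expr = (part.strip() for part in stmt.split("=", 1))
        env[name] = _eval_expr(ast.parse(expr, mode="eval").body, env)
    return env["answer"]

print(solve(FL_PROGRAM))  # → 27
```

In SyReLM the solver's verdict on such an evaluation is what supplies the (non-differentiable) reward signal for policy-gradient training of the adapted LM, rather than the solver being a detached post-hoc tool.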



How to Cite

Dutta, S., Pandey, I., Singh, J., Manchanda, S., Chakrabarti, S., & Chakraborty, T. (2024). Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17951-17959.



AAAI Technical Track on Natural Language Processing I