Controlling Equational Reasoning in Large Language Models with Prompt Interventions

Authors

  • Jordan Meadows, University of Manchester
  • Marco Valentino, Idiap Research Institute
  • André Freitas, University of Manchester; Idiap Research Institute; National Biomarker Centre, CRUK-MI

DOI:

https://doi.org/10.1609/aaai.v39i23.34668

Abstract

This paper investigates how hallucination rates in Large Language Models (LLMs) may be controlled via a symbolic data generation framework, exploring a fundamental relationship between the rate of certain mathematical errors and types of input intervention. Specifically, we systematically generate data for a derivation generation task using a symbolic engine, applying targeted interventions to prompts to perturb features of mathematical derivations such as the surface forms of symbols, equational tree structures, and mathematical context. We then evaluate the effect of prompt interventions across a range of LLMs, including fine-tuned T5 models, GPT models, and LLaMA-based models. Our experiments suggest that T5-Large can outperform the few-shot performance of GPT-4 on various evaluation sets generated via the framework. However, an extensive evaluation based on human analysis, template-based error detection, and text generation metrics reveals model weaknesses beyond what reference-based metrics alone describe. We use these results to tie characteristic distributional footprints of interventions to the human evaluation of LLM derivation quality, potentially leading to significant control over fine-grained mathematical capabilities of language models with respect to specific types of errors.
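To make the idea of a prompt intervention concrete, the following is a minimal, hypothetical sketch (not the authors' framework) of one intervention type named in the abstract: perturbing the surface form of a symbol while leaving the equational tree structure of the derivation unchanged. The function name `rename_symbol` and the example derivation are illustrative assumptions.

```python
import re

def rename_symbol(derivation: list[str], old: str, new: str) -> list[str]:
    """Replace whole-word occurrences of `old` with `new` in each
    derivation step, preserving the equational structure."""
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    return [pattern.sub(new, step) for step in derivation]

# A two-step derivation used as the baseline prompt.
derivation = [
    "f(x) = x**2 + sin(x)",
    "f'(x) = 2*x + cos(x)",
]

# Intervention: perturb only the symbol surface form (x -> theta).
perturbed = rename_symbol(derivation, "x", "theta")
print(perturbed[0])  # f(theta) = theta**2 + sin(theta)
print(perturbed[1])  # f'(theta) = 2*theta + cos(theta)
```

Comparing model outputs on the baseline and perturbed prompts then isolates how sensitive the model's derivation quality is to that single feature.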

Published

2025-04-11

How to Cite

Meadows, J., Valentino, M., & Freitas, A. (2025). Controlling Equational Reasoning in Large Language Models with Prompt Interventions. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24858–24866. https://doi.org/10.1609/aaai.v39i23.34668

Issue

Section

AAAI Technical Track on Natural Language Processing II