Towards Determining How Deep Neural Models Learn to Reason

Authors

  • Anthony Marchiafava, Oklahoma State University
  • Atriya Sen, Oklahoma State University

DOI:

https://doi.org/10.1609/aaaiss.v5i1.35614

Abstract

Large Language Models (LLMs) are well known to perform poorly on tasks involving reasoning, including deductive, inductive, abductive, and spatial reasoning. This is evidenced by many existing benchmarks. While techniques such as chain-of-thought prompting and increased inference-time computation attempt to improve reasoning performance, it is necessary to assess whether large models are ultimately "memorizing" answers to the questions used to evaluate their purportedly learnt reasoning capabilities. In this short paper, we describe work in progress that aims to investigate the innate reasoning processes of large models by generating new questions that require deductive reasoning to answer. Initially, we train a recurrent deep neural model from scratch and assess its ability to make such a binary decision. We then discuss how our preliminary results bear on the hypothesis that some deep neural models can indeed learn to reason in the absence of memorization and semantic shortcuts, and conclude by discussing future work.
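The sketch below illustrates the kind of experiment the abstract describes: a small recurrent model trained from scratch to emit a yes/no answer to tokenized deductive-reasoning questions. It is a minimal illustration only; the class name, layer sizes, optimizer settings, and the placeholder data are assumptions for exposition and do not reflect the authors' actual architecture or question generator.

```python
# A hypothetical sketch (not the authors' model): a GRU-based classifier
# trained from scratch to make a binary (yes/no) decision per question.
import torch
import torch.nn as nn


class RecurrentBinaryClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # single logit: "yes" vs. "no"

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        _, final_hidden = self.rnn(embedded)        # (1, batch, hidden_dim)
        return self.head(final_hidden.squeeze(0))   # (batch, 1)


if __name__ == "__main__":
    torch.manual_seed(0)
    vocab_size, seq_len, batch_size = 100, 32, 16
    model = RecurrentBinaryClassifier(vocab_size)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Placeholder data: random token sequences with random yes/no labels.
    # In the study's setting, these would be freshly generated deductive-
    # reasoning questions, so memorized answers cannot help the model.
    questions = torch.randint(0, vocab_size, (batch_size, seq_len))
    labels = torch.randint(0, 2, (batch_size, 1)).float()

    for step in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(questions), labels)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss = {loss.item():.4f}")
```

Because such a model is trained from scratch on newly generated questions, any above-chance accuracy cannot be attributed to answers memorized during large-scale pretraining, which is the contrast the paper aims to probe.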

Published

2025-05-28

How to Cite

Marchiafava, A., & Sen, A. (2025). Towards Determining How Deep Neural Models Learn to Reason. Proceedings of the AAAI Symposium Series, 5(1), 370–373. https://doi.org/10.1609/aaaiss.v5i1.35614

Issue

Section

Machine Learning and Knowledge Engineering for Trustworthy Multimodal and Generative AI (Position Papers)