NeSTR: A Neuro-Symbolic Abductive Framework for Temporal Reasoning in Large Language Models

Authors

  • Feng Liang, China Academy of Launch Vehicle Technology
  • Weixin Zeng, National Key Laboratory of Big Data and Decision, National University of Defense Technology, China
  • Runhao Zhao, National Key Laboratory of Big Data and Decision, National University of Defense Technology, China
  • Xiang Zhao, National Key Laboratory of Big Data and Decision, National University of Defense Technology, China

DOI:

https://doi.org/10.1609/aaai.v40i38.40460

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of natural language processing tasks. However, temporal reasoning, particularly under complex temporal constraints, remains a major challenge. In response, existing approaches have explored symbolic methods, which encode temporal structure explicitly, and reflective mechanisms, which revise reasoning errors through multi-step inference. Nonetheless, symbolic approaches often underutilize the reasoning capabilities of LLMs, while reflective methods typically lack structured temporal representations, which can result in inconsistent or hallucinated reasoning. As a consequence, even when the correct temporal context is available, LLMs may still misinterpret or misapply time-related information, leading to incomplete or inaccurate answers. To address these limitations, we propose Neuro-Symbolic Temporal Reasoning (NeSTR), a novel framework that integrates structured symbolic representations with hybrid reflective reasoning to enhance the temporal sensitivity of LLM inference. NeSTR preserves explicit temporal relations through symbolic encoding, enforces logical consistency via verification, and corrects flawed inferences using abductive reflection. Extensive experiments on diverse temporal question answering benchmarks demonstrate that NeSTR achieves superior zero-shot performance and consistently improves temporal reasoning without any fine-tuning, showcasing the advantage of neuro-symbolic integration in enhancing temporal understanding in large language models.

Published

2026-03-14

How to Cite

Liang, F., Zeng, W., Zhao, R., & Zhao, X. (2026). NeSTR: A Neuro-Symbolic Abductive Framework for Temporal Reasoning in Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 31907–31915. https://doi.org/10.1609/aaai.v40i38.40460

Section

AAAI Technical Track on Natural Language Processing III