Making Natural Language Reasoning Explainable and Faithful
DOI:
https://doi.org/10.1609/aaai.v38i20.30280
Keywords:
Reasoning And Explanations, Faithfulness And Factuality
Abstract
Neural models, including large language models (LLMs), achieve strong performance on logical reasoning tasks such as question answering. To elicit reasoning capabilities from LLMs, recent work proposes the chain-of-thought (CoT) mechanism, which generates both the reasoning chain and the answer and thereby improves the model's reasoning. However, due to the uninterpretable nature of LLMs and the extreme flexibility of free-form explanations, several challenges remain, such as inaccurate reasoning, hallucinations, and misalignment with human preferences. In this talk, we will focus on (1) our design that leverages structured information, grounded in the context, for explainable complex question answering and reasoning; and (2) our multi-module interpretable framework for inductive reasoning, which conducts step-wise faithful reasoning with iterative feedback.
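For readers unfamiliar with the CoT mechanism referenced above, the sketch below illustrates the general idea of prompting a model to produce a reasoning chain followed by an answer. The `generate` callable, the prompt wording, and the answer-parsing convention are illustrative assumptions, not part of the talk or its framework.

```python
# Minimal chain-of-thought (CoT) prompting sketch.
# Assumption: `generate(prompt: str) -> str` is any text-completion call
# (local or hosted LLM); the few-shot exemplar below is illustrative only.

COT_PROMPT = """Q: A bookstore had 120 books, sold 45 on Monday and 30 on Tuesday.
How many books remain?
A: Let's think step by step.
Step 1: Books sold in total = 45 + 30 = 75.
Step 2: Books remaining = 120 - 75 = 45.
The answer is 45.

Q: {question}
A: Let's think step by step.
"""

def chain_of_thought_answer(question: str, generate) -> tuple[str, str]:
    """Return (reasoning_chain, final_answer) from a single generation."""
    completion = generate(COT_PROMPT.format(question=question))
    # By convention here, the reasoning chain precedes "The answer is ...".
    reasoning, _, answer = completion.partition("The answer is")
    return reasoning.strip(), answer.strip().rstrip(".")
```

Prompting for the chain and the answer in one generation is what makes the reasoning inspectable; the talk's contributions concern making that chain grounded and faithful rather than free-form.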
Published
2024-03-24
How to Cite
Du, X. (2024). Making Natural Language Reasoning Explainable and Faithful. Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22664-22664. https://doi.org/10.1609/aaai.v38i20.30280
Section
New Faculty Highlights