Self-Supervised Logic Induction for Explainable Fuzzy Temporal Commonsense Reasoning
Keywords: SNLP: Applications; SNLP: Sentence-Level Semantics and Textual Inference
Abstract
Understanding temporal commonsense concepts, such as times of occurrence and durations, is crucial for event-centric language understanding. Reasoning about such temporal concepts in a complex context requires reasoning over both the stated context and the world knowledge that underlies it. A recent study shows that massive pre-trained LMs still struggle with such temporal reasoning under complex contexts (e.g., dialog) because they only implicitly encode the relevant contexts and fail to explicitly uncover the underlying logical compositions needed for complex inference, and thus may not be sufficiently robust. In this work, we propose to augment LMs with a temporal logic induction ability, which frames temporal reasoning through three modular components: a temporal dependency inducer, a temporal concept defuzzifier, and a logic validator. The first two components disentangle the explicit/implicit dependencies between temporal concepts across the context (before, after, ...) and the specific meaning of fuzzy temporal concepts, respectively, while the validator combines the intermediate reasoning clues for robust contextual reasoning about temporal concepts. Extensive experimental results on TIMEDIAL, a challenging dataset for temporal reasoning over dialog, show that our method, Logic Induction Enhanced Contextualized TEmporal Reasoning (LECTER), yields substantial improvements over traditional language models for temporal reasoning.
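To make the three-component decomposition above concrete, here is a minimal, purely illustrative sketch of how such a pipeline could fit together. All function names, the rule-based heuristics, and the lookup table are hypothetical assumptions for exposition; the actual LECTER components are learned models, not hand-written rules.

```python
# Hypothetical toy sketch of the three components described in the abstract:
# a dependency inducer, a concept defuzzifier, and a logic validator.
# Everything here is illustrative and NOT the paper's implementation.

from dataclasses import dataclass


@dataclass
class TemporalClue:
    relation: str        # induced ordering cue, e.g. "before" / "after"
    value_hours: float   # defuzzified numeric reading of a fuzzy mention


def dependency_inducer(context: str) -> str:
    """Induce an explicit ordering cue between temporal mentions (toy rule)."""
    return "before" if "before" in context else "after"


def defuzzifier(mention: str) -> float:
    """Map a fuzzy temporal expression to a rough duration in hours (toy table)."""
    table = {"a while": 2.0, "a couple of hours": 2.0, "all day": 10.0}
    return table.get(mention, 1.0)


def logic_validator(clue: TemporalClue, candidate_hours: float) -> bool:
    """Combine the intermediate clues to check a candidate answer (toy rule)."""
    # Accept candidates within a loose factor of the defuzzified reading.
    return 0.5 * clue.value_hours <= candidate_hours <= 2.0 * clue.value_hours


context = "She left a couple of hours before the meeting."
clue = TemporalClue(dependency_inducer(context), defuzzifier("a couple of hours"))
print(clue.relation, logic_validator(clue, 3.0))  # prints: before True
```

The point of the sketch is the division of labor: ordering relations and fuzzy-value grounding are resolved separately, and only the validator fuses them into a final judgment, which is the modular structure the abstract describes.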
How to Cite
Cai, B., Ding, X., Sun, Z., Qin, B., Liu, T., Wang, B., & Shang, L. (2023). Self-Supervised Logic Induction for Explainable Fuzzy Temporal Commonsense Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12580-12588. https://doi.org/10.1609/aaai.v37i11.26481
AAAI Technical Track on Speech & Natural Language Processing