Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey

Authors

  • Prajjwal Bhargava, University of Texas at Dallas
  • Vincent Ng, University of Texas at Dallas

DOI:

https://doi.org/10.1609/aaai.v36i11.21496

Keywords:

Natural Language Processing, Text Mining

Abstract

While commonsense knowledge acquisition and reasoning have traditionally been core research topics in the knowledge representation and reasoning community, recent years have seen a surge of interest within the natural language processing community in developing pre-trained language models and testing their ability to address a variety of newly designed commonsense knowledge reasoning and generation tasks. This paper surveys these tasks, discusses the strengths and weaknesses of state-of-the-art pre-trained models for commonsense reasoning and generation as revealed by these tasks, and reflects on future research directions.
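
To make the flavor of such probing concrete, below is a minimal, hypothetical sketch (not taken from the paper) of querying a pre-trained masked language model for a commonsense fact; it assumes the Hugging Face transformers library and the bert-base-uncased checkpoint.

```python
# A minimal sketch (illustrative only, not from the survey) of probing a
# pre-trained masked language model for commonsense knowledge.
# Assumes the Hugging Face `transformers` library and the
# `bert-base-uncased` checkpoint are available.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model's top-ranked completions hint at what commonsense knowledge
# its pre-training has (or has not) captured.
for prediction in fill_mask("You can use a pen to [MASK] a letter."):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```

Benchmarks of the kind the survey covers systematize this idea: they pose commonsense reasoning or generation problems as text and score how well pre-trained models complete or answer them.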

Published

2022-06-28

How to Cite

Bhargava, P., & Ng, V. (2022). Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12317-12325. https://doi.org/10.1609/aaai.v36i11.21496