On Semantic Cognition, Inductive Generalization, and Language Models

Authors

  • Kanishka Misra, Purdue University

DOI:

https://doi.org/10.1609/aaai.v36i11.21584

Keywords:

Cognitive Science, Natural Language Processing, Language Models, Concepts and Categories, Inductive Reasoning

Abstract

My doctoral research focuses on understanding semantic knowledge in neural network models trained solely to predict natural language (referred to as language models, or LMs), drawing on insights from the study of concepts and categories in cognitive science. I propose a framework inspired by 'inductive reasoning,' a phenomenon that sheds light on how humans use background knowledge to make inductive leaps and generalize from new information about concepts and their properties. Building on experiments that study inductive reasoning, I propose to analyze semantic inductive generalization in LMs through phenomena observed in the human induction literature, investigate inductive behavior on tasks such as implicit reasoning and emergent feature recognition, and relate induction dynamics to the models' learned conceptual representation space.
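As a concrete illustration of the kind of probe such a framework suggests, the sketch below measures how readily an LM extends a novel property from a premise category to a conclusion category. This is a minimal sketch under stated assumptions, not the paper's protocol: it assumes the Hugging Face transformers library and GPT-2 as the LM, and the stimuli are hypothetical examples of "blank" properties used in human induction studies.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Hypothetical setup: any autoregressive LM from `transformers` would
    # do; GPT-2 is used here only because it is small and widely available.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def conditional_logprob(premise: str, conclusion: str) -> float:
        """Log-probability the LM assigns to `conclusion` after `premise`.

        Assumes the tokenization of the premise alone is a prefix of the
        tokenization of the concatenated text (true for GPT-2's BPE when
        the conclusion starts at a clean word boundary).
        """
        premise_ids = tokenizer(premise, return_tensors="pt").input_ids
        full_ids = tokenizer(premise + " " + conclusion,
                             return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        log_probs = torch.log_softmax(logits, dim=-1)
        # The logits at position i - 1 predict the token at position i, so
        # we sum the scores of the conclusion tokens given all prior tokens.
        total = 0.0
        for i in range(premise_ids.size(1), full_ids.size(1)):
            total += log_probs[0, i - 1, full_ids[0, i]].item()
        return total

    # Illustrative stimuli (not from the paper): a property taught about
    # robins should, under similarity-driven induction, extend more readily
    # to sparrows than to hammers.
    premise = "Robins have sesamoid bones."
    for conclusion in ("Sparrows have sesamoid bones.",
                       "Hammers have sesamoid bones."):
        print(conclusion, conditional_logprob(premise, conclusion))

In a property-induction reading, a higher conditional score for the similar conclusion than for the dissimilar one would mirror the similarity effects documented in human induction studies; the proposed framework asks whether, and under what conditions, LMs exhibit such patterns.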

Published

2022-06-28

How to Cite

Misra, K. (2022). On Semantic Cognition, Inductive Generalization, and Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12894-12895. https://doi.org/10.1609/aaai.v36i11.21584