Learning Rewards From Linguistic Feedback

Authors

  • Theodore R. Sumers, Princeton University
  • Mark K. Ho, Princeton University
  • Robert D. Hawkins, Princeton University
  • Karthik Narasimhan, Princeton University
  • Thomas L. Griffiths, Princeton University

DOI:

https://doi.org/10.1609/aaai.v35i7.16749

Keywords:

Learning Human Values and Preferences, Human-Computer Interaction, Language Grounding & Multi-modal NLP, Social Cognition And Interaction

Abstract

We explore unconstrained natural language feedback as a learning signal for artificial agents. Humans use rich and varied language to teach, yet most prior work on interactive learning from language assumes a particular form of input (e.g., commands). We propose a general framework which does not make this assumption, instead using aspect-based sentiment analysis to decompose feedback into sentiment over the features of a Markov decision process. We then infer the teacher's reward function by regressing the sentiment on the features, an analogue of inverse reinforcement learning. To evaluate our approach, we first collect a corpus of teaching behavior in a cooperative task where both teacher and learner are human. We implement three artificial learners: sentiment-based "literal" and "pragmatic" models, and an inference network trained end-to-end to predict rewards. We then re-run our initial experiment, pairing human teachers with these artificial learners. All three models successfully learn from interactive human feedback. The inference network approaches the performance of the "literal" sentiment model, while the "pragmatic" model nears human performance. Our work provides insight into the information structure of naturalistic linguistic feedback as well as methods to leverage it for reinforcement learning.
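
As a rough illustration of the regression step described above, the sketch below shows how per-utterance sentiment scores can be regressed on MDP features to recover reward weights, the analogue of inverse reinforcement learning used by the sentiment-based models. This is a minimal sketch, not the authors' implementation; the feature vectors, sentiment scores, and ordinary-least-squares fit are illustrative assumptions.

```python
# Minimal sketch of the core idea (not the authors' code): treat each
# feedback utterance as a sentiment score attached to the feature
# vector of the behavior it refers to, then estimate reward weights by
# regressing sentiment on those features.
import numpy as np

# Hypothetical data: rows are MDP feature vectors phi(s, a) for
# behaviors the teacher commented on; `sentiment` holds the score
# (e.g., in [-1, 1]) extracted from each utterance by aspect-based
# sentiment analysis.
features = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
])
sentiment = np.array([0.8, -0.5, -0.9, 0.3])

# Least-squares fit: find w_hat such that sentiment ~ features @ w_hat,
# so the inferred reward is r(s, a) = w_hat . phi(s, a).
w_hat, *_ = np.linalg.lstsq(features, sentiment, rcond=None)
print("estimated reward weights:", w_hat)
```

In the paper itself, the sentiment signal is extracted from free-form human feedback collected in the cooperative task; the numbers above are invented purely to show the shape of the regression.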

Published

2021-05-18

How to Cite

Sumers, T. R., Ho, M. K., Hawkins, R. D., Narasimhan, K., & Griffiths, T. L. (2021). Learning Rewards From Linguistic Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 6002-6010. https://doi.org/10.1609/aaai.v35i7.16749

Issue

Vol. 35 No. 7 (2021)

Section

AAAI Technical Track on Humans and AI