Learning Rewards From Linguistic Feedback
Keywords: Learning Human Values and Preferences, Human-Computer Interaction, Language Grounding & Multi-modal NLP, Social Cognition And Interaction
Abstract

We explore unconstrained natural language feedback as a learning signal for artificial agents. Humans use rich and varied language to teach, yet most prior work on interactive learning from language assumes a particular form of input (e.g., commands). We propose a general framework which does not make this assumption, instead using aspect-based sentiment analysis to decompose feedback into sentiment over the features of a Markov decision process. We then infer the teacher's reward function by regressing the sentiment on the features, an analogue of inverse reinforcement learning. To evaluate our approach, we first collect a corpus of teaching behavior in a cooperative task where both teacher and learner are human. We implement three artificial learners: sentiment-based "literal" and "pragmatic" models, and an inference network trained end-to-end to predict rewards. We then re-run our initial experiment, pairing human teachers with these artificial learners. All three models successfully learn from interactive human feedback. The inference network approaches the performance of the "literal" sentiment model, while the "pragmatic" model nears human performance. Our work provides insight into the information structure of naturalistic linguistic feedback as well as methods to leverage it for reinforcement learning.
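The core idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: feedback is assumed to have already been decomposed (via aspect-based sentiment analysis) into per-feature sentiment scores, and the teacher's reward weights are then recovered by regressing sentiment on feature indicators. The feature encoding and sentiment values below are hypothetical.

```python
# Minimal sketch of the paper's core idea (hypothetical data, not the
# authors' code): regress extracted sentiment on MDP feature indicators
# to infer the teacher's reward function, an analogue of IRL.
import numpy as np

# Each row is one piece of feedback, encoded as a one-hot indicator of the
# MDP feature the utterance refers to (e.g., an object's color).
features = np.array([
    [1, 0, 0],  # e.g., "the red ones are great"  -> feature 0
    [0, 1, 0],  # e.g., "avoid the blue ones"     -> feature 1
    [1, 0, 0],
    [0, 0, 1],
])
# Sentiment polarity extracted from each utterance (+1 positive, -1 negative).
sentiment = np.array([1.0, -1.0, 1.0, 0.5])

# Least-squares regression of sentiment on features: the recovered weights
# act as an estimated reward weight per feature.
weights, *_ = np.linalg.lstsq(features, sentiment, rcond=None)
print(weights)  # -> [ 1.  -1.   0.5]
```

With one-hot features, the least-squares solution reduces to the mean sentiment expressed about each feature, which matches the intuition that repeated praise or criticism of a feature signals its reward value.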
How to Cite
Sumers, T. R., Ho, M. K., Hawkins, R. D., Narasimhan, K., & Griffiths, T. L. (2021). Learning Rewards From Linguistic Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 6002-6010. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16749
AAAI Technical Track on Humans and AI