Beyond Winning and Losing: Modeling Human Motivations and Behaviors with Vector-Valued Inverse Reinforcement Learning

Authors

  • Baoxiang Wang, The Chinese University of Hong Kong
  • Tongfang Sun, University of Washington
  • Xianjun Sam Zheng, Deephow

DOI:

https://doi.org/10.1609/aiide.v15i1.5244

Abstract

In recent years, reinforcement learning (RL) methods have been applied to model gameplay with great success, achieving superhuman performance in various environments, such as Atari, Go, and poker. However, these studies mostly focus on winning the game and have largely ignored the rich and complex human motivations that are essential for understanding humans' diverse behavior. In this paper, we present a multi-motivation behavior model that captures multifaceted human motivations and learns the agents' underlying value structure. Our approach extends inverse RL to vector-valued rewards under Pareto optimality, which significantly weakens the standard inverse RL assumption of a single scalar reward. Our model therefore accommodates a wider range of behaviors that commonly appear in real-world environments. For practical assessment, we test our algorithm on World of Warcraft datasets and demonstrate improvements over existing methods.
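To make the vector-valued framing concrete, below is a minimal sketch, not the authors' implementation, of the Pareto-optimality condition the abstract refers to: under a vector-valued reward, demonstrated behavior need only be non-dominated rather than optimal for one scalar reward. The three motivation axes in the toy data are hypothetical labels chosen for illustration.

```python
import numpy as np

def dominates(u: np.ndarray, v: np.ndarray) -> bool:
    """True if return vector u Pareto-dominates v: u is at least as
    good in every objective and strictly better in at least one."""
    return bool(np.all(u >= v) and np.any(u > v))

def pareto_front(returns: np.ndarray) -> np.ndarray:
    """Indices of the non-dominated (Pareto-optimal) return vectors.

    returns: shape (n_policies, n_objectives); each row is the
    vector-valued return of one candidate policy.
    """
    n = returns.shape[0]
    keep = [
        i for i in range(n)
        if not any(dominates(returns[j], returns[i])
                   for j in range(n) if j != i)
    ]
    return np.array(keep)

# Toy example with three hypothetical motivations
# (e.g., combat, social, exploration):
returns = np.array([
    [5.0, 1.0, 2.0],  # combat-focused player
    [1.0, 4.0, 3.0],  # socially focused player
    [2.0, 2.0, 2.0],  # balanced player, dominated by neither
    [1.0, 1.0, 1.0],  # dominated by every row above
])
print(pareto_front(returns))  # -> [0 1 2]
```

Under this view, the first three players are all consistent with rational behavior even though no single scalar reward would rank each of them as optimal, which is the sense in which the vector-valued formulation weakens the usual inverse RL assumption.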


Published

2019-10-08

How to Cite

Wang, B., Sun, T., & Zheng, X. S. (2019). Beyond Winning and Losing: Modeling Human Motivations and Behaviors with Vector-Valued Inverse Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 15(1), 195-201. https://doi.org/10.1609/aiide.v15i1.5244