The Effect of Modeling Human Rationality Level on Learning Rewards from Multiple Feedback Types


  • Gaurav R. Ghosal EECS Department, University of California, Berkeley
  • Matthew Zurek UW-Madison
  • Daniel S. Brown University of Utah
  • Anca D. Dragan EECS Department, University of California, Berkeley



HAI: Learning Human Values and Preferences, HAI: Human-in-the-Loop Machine Learning, ML: Imitation Learning & Inverse Reinforcement Learning


When inferring reward functions from human behavior (be it demonstrations, comparisons, physical corrections, or e-stops), it has proven useful to model the human as making noisy-rational choices, with a "rationality coefficient" capturing how much noise or entropy we expect to see in the human behavior. Prior work typically sets the rationality level to a constant value, regardless of the type or quality of human feedback. However, in many settings, giving one type of feedback (e.g., a demonstration) may be much more difficult than giving another (e.g., answering a comparison query), so we expect more or less noise depending on the type of human feedback. In this work, we advocate that grounding the rationality coefficient in real data for each feedback type, rather than assuming a default value, has a significant positive effect on reward learning. We test this in both simulated experiments and in a user study with real human feedback. We find that overestimating human rationality can have dire effects on reward-learning accuracy and regret. We also find that fitting the rationality coefficient to human data enables better reward learning, even when the human deviates significantly from the noisy-rational choice model due to systematic biases. Further, we find that the rationality level affects the informativeness of each feedback type: surprisingly, demonstrations are not always the most informative. When the human acts very suboptimally, comparisons actually become more informative, even when the rationality level is the same for both. Ultimately, our results emphasize the importance and advantage of paying attention to the assumed human rationality level, especially when agents actively learn from multiple types of human feedback.
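To make the noisy-rational model concrete, the sketch below shows the standard Boltzmann-rational choice distribution, P(choice i) ∝ exp(β · rᵢ), where β is the rationality coefficient, and a simple maximum-likelihood grid search that fits β to observed comparison data instead of assuming a default value. The function names and the grid-search approach are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def boltzmann_choice_probs(rewards, beta):
    """P(choose option i) proportional to exp(beta * r_i).
    Higher beta means more rational (near-deterministic) choices;
    beta = 0 means uniformly random choices."""
    logits = beta * np.asarray(rewards, dtype=float)
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def fit_beta(choice_data, betas=np.linspace(0.01, 10.0, 1000)):
    """Grid-search maximum-likelihood estimate of the rationality
    coefficient from observed (rewards, chosen_index) pairs,
    e.g. answers to comparison queries."""
    def log_likelihood(beta):
        return sum(np.log(boltzmann_choice_probs(r, beta)[c])
                   for r, c in choice_data)
    return max(betas, key=log_likelihood)

# Hypothetical usage: a human picks the higher-reward option 80% of
# the time in a binary comparison; the MLE of beta recovers the
# implied rationality level, ln(4) ~= 1.39.
data = [([1.0, 0.0], 0)] * 80 + [([1.0, 0.0], 1)] * 20
beta_hat = fit_beta(data)
```

Fitting a separate β per feedback type in this way, rather than sharing one constant, is exactly the grounding the abstract advocates: a feedback type that is harder to give well will yield a lower fitted β.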




How to Cite

Ghosal, G. R., Zurek, M., Brown, D. S., & Dragan, A. D. (2023). The Effect of Modeling Human Rationality Level on Learning Rewards from Multiple Feedback Types. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 5983-5992.



AAAI Technical Track on Humans and AI