Inferring Lexicographically-Ordered Rewards from Preferences

Authors

  • Alihan Hüyük (University of Cambridge)
  • William R. Zame (University of California, Los Angeles)
  • Mihaela van der Schaar (University of Cambridge; University of California, Los Angeles; The Alan Turing Institute)

DOI:

https://doi.org/10.1609/aaai.v36i5.20516

Keywords:

Knowledge Representation And Reasoning (KRR)

Abstract

Modeling the preferences of agents over a set of alternatives is a principal concern in many areas. The dominant approach has been to find a single reward/utility function with the property that alternatives yielding higher rewards are preferred over alternatives yielding lower rewards. However, in many settings, preferences are based on multiple—often competing—objectives; a single reward function is not adequate to represent such preferences. This paper proposes a method for inferring multi-objective reward-based representations of an agent's observed preferences. We model the agent's priorities over different objectives as entering lexicographically, so that objectives with lower priorities matter only when the agent is indifferent with respect to objectives with higher priorities. We offer two example applications in healthcare—one inspired by cancer treatment, the other inspired by organ transplantation—to illustrate how the lexicographically-ordered rewards we learn can provide a better understanding of a decision-maker's preferences and help improve policies when used in reinforcement learning.
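To make the lexicographic idea in the abstract concrete, below is a minimal Python sketch of how a lexicographically-ordered preference between two reward vectors could be evaluated: a lower-priority objective is consulted only when the alternatives are (near-)indifferent on every higher-priority one. The function name, the indifference threshold `eps`, and the example objectives are illustrative assumptions, not the authors' inference algorithm (which learns such rewards from observed preferences).

```python
import numpy as np

def lex_prefers(r_a, r_b, eps=1e-6):
    """Return True if alternative A is lexicographically preferred to B.

    r_a, r_b: reward vectors ordered from highest- to lowest-priority objective.
    eps: illustrative indifference threshold; two rewards within eps of each
    other are treated as tied, so the next-priority objective decides.
    """
    for ra, rb in zip(r_a, r_b):
        if abs(ra - rb) > eps:   # not indifferent at this priority level
            return ra > rb       # decide here; lower priorities are ignored
    return False                 # indifferent on every objective

# Hypothetical example: objective 1 (e.g., survival) outranks objective 2 (e.g., comfort)
a = np.array([0.90, 0.10])
b = np.array([0.90, 0.80])
print(lex_prefers(a, b))  # False: tied on objective 1, b wins on objective 2
print(lex_prefers(b, a))  # True
```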

Published

2022-06-28

How to Cite

Hüyük, A., Zame, W. R., & Schaar, M. van der. (2022). Inferring Lexicographically-Ordered Rewards from Preferences. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5737-5745. https://doi.org/10.1609/aaai.v36i5.20516

Section

AAAI Technical Track on Knowledge Representation and Reasoning