Neural Utility Functions

Authors

  • Porter Jenkins Pennsylvania State University
  • Ahmad Farag Georgia Institute of Technology
  • J. Stockton Jenkins Brigham Young University
  • Huaxiu Yao Pennsylvania State University
  • Suhang Wang Pennsylvania State University
  • Zhenhui Li Pennsylvania State University

DOI:

https://doi.org/10.1609/aaai.v35i9.16966

Keywords:

Learning Preferences or Rankings, Recommender Systems & Collaborative Filtering

Abstract

Current neural network architectures have no mechanism for explicitly reasoning about item trade-offs. Such trade-offs are important for popular tasks such as recommendation. The main idea of this work is to give neural networks inductive biases that are inspired by economic theories. To this end, we propose Neural Utility Functions, which directly optimize the gradients of a neural network so that they are more consistent with utility theory, a mathematical framework for modeling choice among items. We demonstrate that Neural Utility Functions can recover theoretical item relationships better than vanilla neural networks, show analytically that existing neural networks are not quasi-concave and do not inherently reason about trade-offs, and show that augmenting existing models with a utility loss function improves recommendation results. The Neural Utility Functions we propose are theoretically motivated and yield strong empirical results.
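The core idea in the abstract, regularizing a network's input gradients so they behave like marginal utilities, can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's exact formulation: the model, the `utility_gradient_penalty` helper, and the choice of penalizing negative marginal utilities (non-satiation, one basic property from utility theory) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class UtilityNet(nn.Module):
    """Toy network scoring a bundle of item quantities (illustrative)."""
    def __init__(self, n_items: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_items, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def utility_gradient_penalty(model: nn.Module, bundles: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss: penalize negative marginal utilities dU/dx.

    The penalty is zero exactly when every input gradient is non-negative,
    i.e. when the network's output is (locally) monotone in each item.
    """
    bundles = bundles.clone().requires_grad_(True)
    utility = model(bundles).sum()
    grads, = torch.autograd.grad(utility, bundles, create_graph=True)
    return torch.relu(-grads).mean()

torch.manual_seed(0)
model = UtilityNet(n_items=4)
x = torch.rand(8, 4)  # a batch of item bundles
penalty = utility_gradient_penalty(model, x)
# In training, one would minimize: task_loss + lambda * penalty
```

Because `create_graph=True` keeps the gradient computation differentiable, the penalty can be backpropagated alongside an ordinary task loss, which is in the spirit of the abstract's "augmenting existing models with a utility loss function".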

Published

2021-05-18

How to Cite

Jenkins, P., Farag, A., Jenkins, J. S., Yao, H., Wang, S., & Li, Z. (2021). Neural Utility Functions. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7917-7925. https://doi.org/10.1609/aaai.v35i9.16966

Section

AAAI Technical Track on Machine Learning II