Fast Offline Policy Optimization for Large Scale Recommendation

Authors

  • Otmane Sakhi, Criteo; ENSAE, IPP
  • David Rohde, Criteo
  • Alexandre Gilotte, Criteo

DOI:

https://doi.org/10.1609/aaai.v37i8.26158

Keywords:

ML: Scalability of ML Systems, DMKM: Recommender Systems, RU: Sequential Decision Making

Abstract

Personalised interactive systems such as recommender systems require selecting relevant items from massive catalogues, dependent on context. Reward-driven offline optimisation of these systems can be achieved by relaxing the discrete problem, resulting in policy learning or REINFORCE-style learning algorithms. Unfortunately, this relaxation step requires computing a sum over the entire catalogue, making the complexity of evaluating the gradient (and hence of each stochastic gradient descent iteration) linear in the catalogue size. This computation is untenable in many real-world settings such as large-catalogue recommender systems, severely limiting the usefulness of the method in practice. In this paper, we derive an approximation of these policy learning algorithms that scales logarithmically with the catalogue size. Our contribution is based upon combining three novel ideas: a new Monte Carlo estimate of the gradient of a policy, the self-normalised importance sampling estimator, and the use of fast maximum inner product search at training time. Extensive experiments show that our algorithm is an order of magnitude faster than naive approaches yet produces equally good policies.
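To make the scaling issue concrete, below is a minimal numpy sketch, illustrative only and not the authors' exact estimator: a linear softmax policy whose exact REINFORCE-style gradient requires a sum over the whole catalogue, contrasted with a self-normalised importance sampling approximation computed from a small sample. The embeddings, the uniform proposal, and all sizes are hypothetical; the paper draws samples guided by fast maximum inner product search rather than uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear softmax policy over a large catalogue,
# pi_theta(a | x) proportional to exp((x @ theta) . item_emb[a]).
n_items, d_ctx, d_item = 100_000, 16, 16
item_emb = rng.normal(size=(n_items, d_item))   # hypothetical item embeddings
theta = rng.normal(size=(d_ctx, d_item)) * 0.01
x = rng.normal(size=d_ctx)                      # one context vector

def scores(theta):
    return (x @ theta) @ item_emb.T             # shape (n_items,)

def exact_grad_log_pi(theta, a):
    """Exact gradient of log pi(a|x): O(n_items) per update."""
    s = scores(theta)
    p = np.exp(s - s.max()); p /= p.sum()       # full softmax over the catalogue
    # d log pi / d theta = x outer (beta_a - E_pi[beta])
    return np.outer(x, item_emb[a] - p @ item_emb)

def snis_grad_log_pi(theta, a, n_samples=512):
    """Approximate gradient: replace E_pi[beta] with a self-normalised
    importance sampling estimate from a cheap proposal (uniform here)."""
    idx = rng.integers(0, n_items, size=n_samples)   # proposal q = uniform
    s = (x @ theta) @ item_emb[idx].T                # scores for the sample only
    w = np.exp(s - s.max())                          # unnormalised pi/q weights
    w /= w.sum()                                     # self-normalisation
    return np.outer(x, item_emb[a] - w @ item_emb[idx])

a = int(np.argmax(scores(theta)))                    # some logged action
g_exact = exact_grad_log_pi(theta, a)
g_approx = snis_grad_log_pi(theta, a)
print("relative error:", np.linalg.norm(g_exact - g_approx) / np.linalg.norm(g_exact))
```

The approximate gradient touches only the sampled items, so its cost is independent of the catalogue size; in the paper the sampling step is further accelerated with fast maximum inner product search, yielding the logarithmic scaling claimed in the abstract.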

Published

2023-06-26

How to Cite

Sakhi, O., Rohde, D., & Gilotte, A. (2023). Fast Offline Policy Optimization for Large Scale Recommendation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9686-9694. https://doi.org/10.1609/aaai.v37i8.26158

Section

AAAI Technical Track on Machine Learning III