Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests

Authors

  • Xiao Xu, Cornell University
  • Fang Dong, Alibaba Group
  • Yanghua Li, Alibaba Group
  • Shaojian He, Alibaba Group
  • Xin Li, Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v34i04.6125

Abstract

A contextual bandit problem is studied in a highly non-stationary environment, which is ubiquitous in various recommender systems due to the time-varying interests of users. Two models with disjoint and hybrid payoffs are considered to characterize the phenomenon that users' preferences towards different items vary differently over time. In the disjoint payoff model, the reward of playing an arm is determined by an arm-specific preference vector, which is piecewise-stationary with asynchronous and distinct changes across different arms. An efficient learning algorithm that is adaptive to abrupt reward changes is proposed and theoretical regret analysis is provided to show that a sublinear scaling of regret in the time length T is achieved. The algorithm is further extended to a more general setting with hybrid payoffs where the reward of playing an arm is determined by both an arm-specific preference vector and a joint coefficient vector shared by all arms. Empirical experiments are conducted on real-world datasets to verify the advantages of the proposed learning algorithms against baseline ones in both settings.
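The disjoint payoff model described above builds on the standard disjoint linear contextual bandit: each arm keeps its own preference vector, estimated by per-arm ridge regression, and arms are chosen by an upper-confidence-bound rule. The sketch below illustrates that baseline structure, plus a per-arm `restart` hook of the kind an adaptive algorithm could invoke when it detects an abrupt reward change for one arm. This is a minimal illustration, not the paper's algorithm; the class and method names are hypothetical, and no change-detection procedure is implemented.

```python
import numpy as np

class DisjointLinUCB:
    """Minimal disjoint-payoff linear bandit sketch (LinUCB-style).

    Each arm a keeps its own preference estimate theta_a; the reward of
    playing arm a with context x is modeled as x @ theta_a + noise.
    Names and structure are illustrative, not the paper's algorithm.
    """

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha  # exploration parameter
        self.dim = dim
        # Per-arm ridge-regression statistics: A_a = I + sum x x^T, b_a = sum r x.
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, contexts):
        """contexts: one feature vector per arm. Returns the arm index
        maximizing the estimated mean reward plus a UCB exploration bonus."""
        scores = []
        for a, x in enumerate(contexts):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]
            scores.append(x @ theta + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Fold the observed (context, reward) pair into the played arm's stats."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

    def restart(self, arm):
        """Discard one arm's statistics, e.g. after a detected change in its
        preference vector; other arms' estimates are untouched, matching the
        asynchronous, arm-specific changes in the disjoint payoff model."""
        self.A[arm] = np.eye(self.dim)
        self.b[arm] = np.zeros(self.dim)
```

Because the statistics are disjoint, a change detected for one arm only resets that arm's estimate; in the hybrid payoff setting, a shared coefficient vector would additionally be maintained across all arms.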

Published

2020-04-03

How to Cite

Xu, X., Dong, F., Li, Y., He, S., & Li, X. (2020). Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6518-6525. https://doi.org/10.1609/aaai.v34i04.6125

Section

AAAI Technical Track: Machine Learning