Learning to Recommend from Sparse Data via Generative User Feedback

Authors

  • Wenlin Wang, Duke University

DOI:

https://doi.org/10.1609/aaai.v35i5.16570

Keywords:

Recommender Systems & Collaborative Filtering, Neural Generative Models & Autoencoders, Reinforcement Learning

Abstract

Traditional collaborative filtering (CF) based recommender systems tend to perform poorly when the user-item interactions/ratings are highly sparse. To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate user feedback. The proposed framework consists of a recommender and a virtual user. The recommender is formulated as a CF model that recommends items according to the observed user preference. The virtual user estimates rewards for the recommended items and generates feedback in addition to the observed user preference. Together, the recommender and the virtual user form a closed loop that recommends items to users and imitates the users' unobserved feedback on the recommended items. The synthetic feedback is used to augment the observed user preference and improve recommendation results. Theoretically, this design can be interpreted as inverse reinforcement learning, which can be learned effectively via rollout (simulation). Experimental results show that the proposed framework enriches the learning of user preference and boosts the performance of existing collaborative filtering methods on multiple datasets.
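For intuition only, the sketch below illustrates such a closed loop in Python: a toy linear autoencoder stands in for the CF recommender, and a heuristic reward stands in for the paper's learned virtual user. All function names, the reward proxy, and the feedback weighting are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a CF-SFL-style loop:
# a toy recommender plus a hypothetical virtual user that turns
# recommendations into synthetic feedback used to augment the input.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 8

# Sparse observed user-item interactions (binary implicit feedback).
R = (rng.random((n_users, n_items)) < 0.05).astype(float)

# Toy recommender: a linear autoencoder with scores sigmoid(x W V).
W = 0.01 * rng.standard_normal((n_items, k))
V = 0.01 * rng.standard_normal((k, n_items))

def recommend(x):
    """Score all items for a batch of (possibly augmented) preference vectors x."""
    return 1.0 / (1.0 + np.exp(-x @ W @ V))

def virtual_user(x_obs, scores, top=5):
    """Hypothetical virtual user: assign a crude reward to recommended items
    (item co-occurrence with the user's observed profile) and emit synthetic
    feedback on the top-rewarded items."""
    reward = scores * (x_obs @ (R.T @ R) / (R.sum(0) + 1e-6))
    feedback = np.zeros_like(scores)
    top_idx = np.argsort(-reward, axis=1)[:, :top]
    np.put_along_axis(feedback, top_idx, 1.0, axis=1)
    return feedback

# Closed loop: recommend -> synthesize feedback -> augment input -> recommend again.
scores = recommend(R)
synthetic = virtual_user(R, scores)
augmented = np.clip(R + 0.5 * synthetic, 0.0, 1.0)  # 0.5 is an illustrative weight
scores_refined = recommend(augmented)
print(scores_refined.shape)  # (100, 50)
```

In the paper, the virtual user (reward estimator and feedback generator) is learned jointly with the recommender under an inverse reinforcement learning interpretation rather than fixed as a heuristic; the sketch only shows how synthetic feedback augments the sparse observed preferences within one loop iteration.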

Published

2021-05-18

How to Cite

Wang, W. (2021). Learning to Recommend from Sparse Data via Generative User Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5), 4436-4444. https://doi.org/10.1609/aaai.v35i5.16570

Section

AAAI Technical Track on Data Mining and Knowledge Management