Text-Based Interactive Recommendation via Offline Reinforcement Learning

Authors

  • Ruiyi Zhang Duke University
  • Tong Yu Samsung Research America
  • Yilin Shen Samsung Research America
  • Hongxia Jin Samsung Research America

DOI:

https://doi.org/10.1609/aaai.v36i10.21424

Keywords:

Speech & Natural Language Processing (SNLP), Humans And AI (HAI)

Abstract

Interactive recommendation with natural-language feedback can provide richer user feedback and has demonstrated advantages over traditional recommender systems. However, the classical online paradigm involves iteratively collecting experience via interaction with users, which is expensive and risky. We consider offline interactive recommendation, which exploits arbitrary experience collected by multiple unknown policies. A direct application of policy learning with such fixed experience suffers from distribution shift. To tackle this issue, we develop a behavior-agnostic off-policy correction framework that makes offline interactive recommendation possible. Specifically, we leverage a conservative Q-function to perform off-policy evaluation, which enables learning effective policies from fixed datasets without further interactions. Empirical results on a simulator derived from real-world datasets demonstrate the effectiveness of our proposed offline training framework.
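To illustrate the conservative Q-function idea the abstract refers to, here is a toy tabular sketch: Q-values of actions not supported by the logged data are pushed down, while Q-values of logged actions are pushed up alongside the usual TD update. This is only a minimal illustration of the general conservative-Q technique, not the paper's model (which handles text-based states with neural function approximation); all names and hyperparameters below are assumptions for the example.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def cql_update(Q, s, a, r, s2, done, gamma=0.99, lr=0.1, alpha=1.0):
    """One tabular conservative Q-learning step on a logged transition.

    Q     : (n_states, n_actions) value table, updated in place
    alpha : weight of the conservative penalty (illustrative choice)
    """
    td_target = r + (0.0 if done else gamma * Q[s2].max())
    # Standard TD step toward the target for the logged action.
    Q[s, a] -= lr * (Q[s, a] - td_target)
    # Conservative penalty: push down Q on all actions in proportion to
    # their softmax weight, and push back up the action seen in the data,
    # so out-of-distribution actions end up underestimated.
    Q[s] -= lr * alpha * softmax(Q[s])
    Q[s, a] += lr * alpha
    return Q

# Fixed offline dataset: one state, only action 0 was ever logged.
Q = np.zeros((1, 2))
for _ in range(500):
    cql_update(Q, s=0, a=0, r=1.0, s2=0, done=True)
```

After training on this fixed dataset, the logged action keeps a Q-value near its observed return while the unseen action is driven below it, which is the property that guards policy learning against distribution shift.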

Published

2022-06-28

How to Cite

Zhang, R., Yu, T., Shen, Y., & Jin, H. (2022). Text-Based Interactive Recommendation via Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11694-11702. https://doi.org/10.1609/aaai.v36i10.21424

Section

AAAI Technical Track on Speech and Natural Language Processing