U-BERT: Pre-training User Representations for Improved Recommendation

Authors

  • Zhaopeng Qiu, Tencent Medical AI Lab
  • Xian Wu, Tencent Medical AI Lab
  • Jingyue Gao, Peking University
  • Wei Fan, Tencent Medical AI Lab

DOI:

https://doi.org/10.1609/aaai.v35i5.16557

Keywords:

Recommender Systems & Collaborative Filtering

Abstract

Learning user representations is a critical task for recommender systems, as these representations encode user preferences for personalized services. User representations are generally learned from behavior data, such as click interactions and review comments. However, for less popular domains, the behavior data is insufficient for learning precise user representations. To address this problem, a natural idea is to leverage content-rich domains to complement user representations. Inspired by the recent success of BERT in NLP, we propose U-BERT, a novel pre-training and fine-tuning based approach. Unlike typical BERT applications, U-BERT is customized for recommendation and uses different frameworks in pre-training and fine-tuning. In pre-training, U-BERT focuses on content-rich domains and introduces a user encoder and a review encoder to model users' behaviors; two pre-training strategies are proposed to learn general user representations. In fine-tuning, U-BERT focuses on the target content-insufficient domains. In addition to the user and review encoders inherited from the pre-training stage, U-BERT introduces an item encoder to model item representations. A review co-matching layer is further proposed to capture richer semantic interactions between the user's and the item's reviews. Finally, U-BERT combines user representations, item representations, and review interaction information to improve recommendation performance. Experiments on six benchmark datasets from different domains demonstrate the state-of-the-art performance of U-BERT.
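The abstract describes the fine-tuning architecture only at a high level. Below is a minimal, illustrative sketch (in PyTorch) of how user and item representations might be combined with a review co-matching signal to produce a preference score. All class names, dimensions, the mean pooling, and the cross-attention stand-in for the co-matching layer are assumptions made for illustration; this is not the authors' implementation.

# Illustrative sketch only: combining user, item, and review co-matching
# signals for preference prediction, as outlined in the abstract.
import torch
import torch.nn as nn


class ReviewCoMatching(nn.Module):
    # Cross-attention between a user's review encodings and an item's review
    # encodings; an assumed stand-in for the paper's review co-matching layer.
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, user_reviews: torch.Tensor, item_reviews: torch.Tensor) -> torch.Tensor:
        # user_reviews: (batch, n_user_reviews, dim); item_reviews: (batch, n_item_reviews, dim)
        matched, _ = self.attn(user_reviews, item_reviews, item_reviews)
        return matched.mean(dim=1)  # pooled review-interaction vector: (batch, dim)


class UBERTFineTuneSketch(nn.Module):
    # Fine-tuning stage: the user and review encoders stand in for modules
    # carried over from pre-training; the item encoder is new in fine-tuning.
    def __init__(self, dim: int = 256):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.user_encoder = nn.TransformerEncoder(layer, num_layers=2)    # inherited (pre-trained)
        self.review_encoder = nn.TransformerEncoder(layer, num_layers=2)  # inherited (pre-trained)
        self.item_encoder = nn.TransformerEncoder(layer, num_layers=2)    # introduced in fine-tuning
        self.co_matching = ReviewCoMatching(dim)
        self.predictor = nn.Linear(3 * dim, 1)  # maps concatenated signals to a score

    def forward(self, user_feats, item_feats, user_review_embs, item_review_embs):
        u = self.user_encoder(user_feats).mean(dim=1)   # user representation
        v = self.item_encoder(item_feats).mean(dim=1)   # item representation
        r_u = self.review_encoder(user_review_embs)     # encoded user reviews
        r_v = self.review_encoder(item_review_embs)     # encoded item reviews
        m = self.co_matching(r_u, r_v)                  # review interaction signal
        return self.predictor(torch.cat([u, v, m], dim=-1)).squeeze(-1)


# Example usage with random embeddings (batch of 8, embedding size 256):
model = UBERTFineTuneSketch(dim=256)
scores = model(
    torch.randn(8, 10, 256), torch.randn(8, 12, 256),
    torch.randn(8, 5, 256), torch.randn(8, 7, 256),
)  # -> tensor of shape (8,) with predicted preference scores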

Published

2021-05-18

How to Cite

Qiu, Z., Wu, X., Gao, J., & Fan, W. (2021). U-BERT: Pre-training User Representations for Improved Recommendation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5), 4320-4327. https://doi.org/10.1609/aaai.v35i5.16557

Section

AAAI Technical Track on Data Mining and Knowledge Management