ATRank: An Attention-Based User Behavior Modeling Framework for Recommendation


  • Chang Zhou Alibaba Group
  • Jinze Bai Peking University
  • Junshuai Song Peking University
  • Xiaofei Liu Alibaba Group
  • Zhengchao Zhao Alibaba Group
  • Xiusi Chen Peking University
  • Jun Gao Peking University



Keywords: User Modeling, Attention Model, Recommendation


A user can be represented by what he or she does over the course of history. A common approach to user modeling is to manually extract various kinds of aggregated features from the heterogeneous behaviors, which may fail to fully represent the data itself due to the limits of human intuition. Recent works typically use RNN-based methods to produce an overall embedding of a behavior sequence, which downstream applications can then exploit. However, such an embedding preserves only very limited information, an aggregated memory of a person. When a downstream application needs the modeled user features, it may lose the integrity of the user's specific, highly correlated behaviors and introduce noise derived from unrelated behaviors. This paper proposes an attention-based user behavior modeling framework called ATRank, which we apply mainly to recommendation tasks. Our model handles heterogeneous user behaviors by projecting all types of behaviors into multiple latent semantic spaces, where the behaviors influence one another via self-attention. Downstream applications can then consume the user behavior vectors via vanilla attention. Experiments show that ATRank achieves better performance and faster training. We further extend ATRank so that one unified model predicts different types of user behaviors at the same time, achieving performance comparable to that of highly optimized individual models.
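The pipeline the abstract describes can be illustrated with a minimal NumPy sketch: per-type linear projections map heterogeneous behavior embeddings into one shared latent space, scaled dot-product self-attention lets the behaviors influence one another, and a downstream query (e.g. a candidate item) pools the sequence via vanilla attention. All dimensions, weight matrices, and behavior types below are illustrative assumptions, not the paper's actual configuration (the paper uses multiple latent semantic spaces; a single space is shown here for brevity).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 16  # shared latent dimension (illustrative choice)

# Heterogeneous behaviors with different raw embedding sizes.
clicks = rng.normal(size=(5, 8))      # 5 click behaviors, raw dim 8
purchases = rng.normal(size=(3, 12))  # 3 purchase behaviors, raw dim 12

# Per-type projections into a shared latent semantic space.
W_click = rng.normal(size=(8, d))
W_buy = rng.normal(size=(12, d))
behaviors = np.vstack([clicks @ W_click, purchases @ W_buy])  # (8, d)

# Self-attention: each behavior attends to every behavior in the sequence.
scores = behaviors @ behaviors.T / np.sqrt(d)
refined = softmax(scores, axis=-1) @ behaviors  # (8, d)

# Vanilla attention: a downstream query pools the refined sequence
# into a single user behavior vector for ranking.
query = rng.normal(size=(d,))
weights = softmax(refined @ query / np.sqrt(d))
user_vector = weights @ refined  # (d,)
```

In the full model, multiple projection heads (one per latent semantic space) replace the single shared space shown here, and the query would come from the downstream task rather than being random.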




How to Cite

Zhou, C., Bai, J., Song, J., Liu, X., Zhao, Z., Chen, X., & Gao, J. (2018). ATRank: An Attention-Based User Behavior Modeling Framework for Recommendation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).