Hierarchical Reinforcement Learning for Course Recommendation in MOOCs

Authors

  • Jing Zhang, Renmin University of China
  • Bowen Hao, Renmin University of China
  • Bo Chen, Renmin University of China
  • Cuiping Li, Renmin University of China
  • Hong Chen, Renmin University of China
  • Jimeng Sun, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.3301435

Abstract

The proliferation of massive open online courses (MOOCs) demands effective personalized course recommendation. Recent attention-based recommendation models can distinguish the effects of different historical courses when recommending different target courses. However, when a user is interested in many different courses, the attention mechanism performs poorly, because the effects of the truly contributing courses are diluted by the diverse historical courses. To address this challenge, we propose a hierarchical reinforcement learning algorithm that revises the user profiles and tunes the course recommendation model on the revised profiles.
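
The sketch below illustrates one way such a hierarchical profile reviser could be organized; it is a minimal illustration under our own assumptions, not the paper's exact formulation. A hypothetical high-level policy decides whether a user's profile should be revised at all, and a hypothetical low-level policy decides, course by course, which historical enrollments to keep before the recommender is tuned on the revised profile. All class and function names (HighLevelPolicy, LowLevelPolicy, revise_profile) are illustrative.

```python
import torch
import torch.nn as nn


class HighLevelPolicy(nn.Module):
    """Decides whether a user's profile should be revised at all."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 2)  # index 0: keep profile as is, index 1: revise it

    def forward(self, profile_repr):
        return torch.softmax(self.scorer(profile_repr), dim=-1)


class LowLevelPolicy(nn.Module):
    """Decides, for each historical course, whether to drop or keep it."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(2 * dim, 2)  # index 0: drop the course, index 1: keep it

    def forward(self, course_embs, target_emb):
        target = target_emb.expand_as(course_embs)
        pair = torch.cat([course_embs, target], dim=-1)
        return torch.softmax(self.scorer(pair), dim=-1)


def revise_profile(course_embs, target_emb, high, low):
    """One sampled revision of a user profile (a sketch, not the paper's procedure)."""
    profile_repr = course_embs.mean(dim=0)
    if torch.multinomial(high(profile_repr), 1).item() == 0:
        return course_embs                      # high-level action: leave profile unchanged
    keep_probs = low(course_embs, target_emb)[:, 1]
    mask = torch.bernoulli(keep_probs).bool()   # low-level actions: keep/drop each course
    return course_embs[mask] if mask.any() else course_embs


# Toy usage: 7 historical course embeddings, one candidate target course.
dim = 32
high, low = HighLevelPolicy(dim), LowLevelPolicy(dim)
history = torch.randn(7, dim)
target = torch.randn(1, dim)
revised = revise_profile(history, target, high, low)
```

In a full training loop, the two policies would be rewarded (e.g., via REINFORCE) by how much the recommender's likelihood of the true target course improves on the revised profile; that loop is omitted here.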

We systematically evaluate the proposed model on a real dataset of 1,302 courses, 82,535 users, and 458,454 user enrollment records collected from XuetangX, one of the largest MOOC platforms in China. Experimental results show that the proposed model significantly outperforms state-of-the-art recommendation models, improving HR@10 by 5.02% to 18.95%.
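
HR@10 is the standard top-10 hit ratio: the fraction of test users whose held-out course appears among the model's ten highest-ranked recommendations. A small self-contained illustration follows; the function name and toy data are ours, not from the paper.

```python
def hit_ratio_at_k(ranked_items, ground_truth, k=10):
    """HR@K: fraction of users whose held-out course appears in their top-K list."""
    hits = sum(1 for ranked, truth in zip(ranked_items, ground_truth)
               if truth in ranked[:k])
    return hits / len(ground_truth)


# Toy example: 2 of 3 users have their held-out course ranked in the top 10.
ranked = [[5, 2, 9, 1, 7, 3, 8, 4, 6, 0],
          [11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
          [42, 7, 1, 3, 2, 9, 8, 6, 5, 4]]
truth = [9, 99, 42]
print(hit_ratio_at_k(ranked, truth))  # 0.666...
```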

Published

2019-07-17

How to Cite

Zhang, J., Hao, B., Chen, B., Li, C., Chen, H., & Sun, J. (2019). Hierarchical Reinforcement Learning for Course Recommendation in MOOCs. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 435-442. https://doi.org/10.1609/aaai.v33i01.3301435

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: AI and the Web