The Adaptive Q-Network for Recommendation Tasks with Dynamic Item Space

Authors

  • Jianxiang Zhu, Shanghai University
  • Dandan Lai, Shanghai University
  • Zhongcui Ma, Shanghai University
  • Yaxin Peng, Shanghai University

DOI:

https://doi.org/10.1609/aaai.v39i12.33467

Abstract

Reinforcement learning (RL) algorithms can improve recommendation performance by capturing long-term user-system interaction. However, current RL-based recommendation tasks seldom consider the dynamism of the environment, and standard RL algorithms are ineffective at recommending items dynamically. To address these issues, we design a novel task termed dynamic recommendation, which takes the emergence of new recommendable items in the real world into consideration. We then propose the Adaptive Q-Network (AdaQN) to tackle the dynamic recommendation task. First, AdaQN predicts the value of different action characteristics, particularly during the testing phase, allowing it to capture newly emerging action characteristics; this procedure lets AdaQN adapt effectively to the dynamic action space. Second, AdaQN establishes a stable mapping that projects the discrete action space onto a continuous characteristic space. Finally, AdaQN employs a lightweight Q-network design, which mitigates the complexity of the optimization process. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on the dynamic recommendation task.
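The core idea the abstract describes, scoring (state, action-characteristic) pairs rather than a fixed set of action indices, can be sketched as follows. This is a minimal illustration, not the authors' implementation: all names, dimensions, and the random weights are assumptions, and the network here is an untrained stand-in for a learned Q-network.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, CHAR_DIM, HIDDEN = 8, 4, 16  # illustrative sizes

# Hypothetical weights of a small Q-network: Q(s, c) = w2 . relu(W1 [s; c]).
# Because the network consumes an item's characteristic vector instead of a
# fixed item index, its output layer never depends on the catalog size.
W1 = rng.normal(size=(HIDDEN, STATE_DIM + CHAR_DIM))
w2 = rng.normal(size=HIDDEN)

def q_value(state, char):
    """Score one (user state, item characteristic) pair."""
    h = np.maximum(0.0, W1 @ np.concatenate([state, char]))
    return float(w2 @ h)

def recommend(state, item_chars):
    """Greedy action: argmax of Q over the *current* item catalog."""
    scores = [q_value(state, c) for c in item_chars]
    return int(np.argmax(scores))

state = rng.normal(size=STATE_DIM)
catalog = [rng.normal(size=CHAR_DIM) for _ in range(3)]
choice_before = recommend(state, catalog)  # pick among 3 items

# A new item appears at test time: the same network scores its
# characteristic vector directly, with no change to the architecture.
catalog.append(rng.normal(size=CHAR_DIM))
choice_after = recommend(state, catalog)   # now picks among 4 items
```

The point of the sketch is structural: because actions enter through a shared characteristic space, extending the catalog only lengthens the argmax, which is one plausible reading of how AdaQN adapts to a dynamic action space.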

Published

2025-04-11

How to Cite

Zhu, J., Lai, D., Ma, Z., & Peng, Y. (2025). The Adaptive Q-Network for Recommendation Tasks with Dynamic Item Space. Proceedings of the AAAI Conference on Artificial Intelligence, 39(12), 13437–13445. https://doi.org/10.1609/aaai.v39i12.33467

Section

AAAI Technical Track on Data Mining & Knowledge Management II