Harnessing Multimodal Large Language Models for Multimodal Sequential Recommendation

Authors

  • Yuyang Ye, Rutgers University
  • Zhi Zheng, University of Science and Technology of China
  • Yishan Shen, University of Pennsylvania
  • Tianshu Wang, Bytedance Inc.
  • Hengruo Zhang, Bytedance Inc.
  • Peijun Zhu, Georgia Institute of Technology
  • Runlong Yu, University of Pittsburgh
  • Kai Zhang, University of Science and Technology of China
  • Hui Xiong, Hong Kong University of Science and Technology (Guangzhou)

DOI:

https://doi.org/10.1609/aaai.v39i12.33426

Abstract

Recent advances in Large Language Models (LLMs) have demonstrated significant potential in the field of Recommendation Systems (RSs). Most existing studies focus on converting user behavior logs into textual prompts and leveraging techniques such as prompt tuning to enable LLMs for recommendation tasks. Meanwhile, research interest has recently grown in multimodal recommendation systems that integrate data from images, text, and other sources using modality fusion techniques. This introduces new challenges for the existing LLM-based recommendation paradigm, which relies solely on text-modality information. Moreover, although Multimodal Large Language Models (MLLMs) capable of processing multimodal inputs have emerged, how to equip MLLMs with multimodal recommendation capabilities remains largely unexplored. To this end, in this paper, we propose the Multimodal Large Language Model-enhanced Sequential Multimodal Recommendation (MLLM-MSR) model. To capture dynamic user preferences, we design a two-stage user preference summarization method. Specifically, we first utilize an MLLM-based item-summarizer to extract the image features of a given item and convert the image into text. Then, we employ a recurrent user preference summarization paradigm that uses an LLM-based user-summarizer to capture the dynamic changes in user preferences. Finally, to enable the MLLM to perform multimodal recommendation, we fine-tune an MLLM-based recommender using Supervised Fine-Tuning (SFT) techniques. Extensive evaluations across various datasets validate the effectiveness of MLLM-MSR, showcasing its superior ability to capture and adapt to the evolving dynamics of user preferences.
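The two-stage summarization described in the abstract can be sketched as a recurrent fold over a user's interaction sequence. The sketch below is a minimal illustration only: `item_summarize` and `update_preference` are hypothetical placeholder stubs standing in for the paper's MLLM-based item-summarizer and LLM-based user-summarizer, which the source does not specify at the code level.

```python
# Hedged sketch of MLLM-MSR's two-stage user preference summarization.
# item_summarize / update_preference are hypothetical stand-ins for the
# actual MLLM and LLM calls described in the paper.

def item_summarize(item: dict) -> str:
    # Stage 1 (stub): an MLLM would caption the item image and merge it
    # with the item's text; here we just join the available fields.
    return f"{item['title']} ({item['image_caption']})"

def update_preference(summary_so_far: str, item_text: str) -> str:
    # Stage 2 (stub): an LLM would rewrite the running preference summary
    # in light of the newest item; we concatenate as a placeholder.
    return item_text if not summary_so_far else f"{summary_so_far}; {item_text}"

def summarize_user_preferences(interaction_sequence: list[dict]) -> str:
    """Recurrently fold item summaries into one evolving preference summary."""
    summary = ""
    for item in interaction_sequence:
        summary = update_preference(summary, item_summarize(item))
    return summary

history = [
    {"title": "running shoes", "image_caption": "blue mesh sneakers"},
    {"title": "sports watch", "image_caption": "black GPS watch"},
]
print(summarize_user_preferences(history))
# → running shoes (blue mesh sneakers); sports watch (black GPS watch)
```

The resulting text summary is what the fine-tuned MLLM-based recommender would consume during Supervised Fine-Tuning in the paper's final stage.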

Published

2025-04-11

How to Cite

Ye, Y., Zheng, Z., Shen, Y., Wang, T., Zhang, H., Zhu, P., … Xiong, H. (2025). Harnessing Multimodal Large Language Models for Multimodal Sequential Recommendation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(12), 13069–13077. https://doi.org/10.1609/aaai.v39i12.33426

Section

AAAI Technical Track on Data Mining & Knowledge Management II