GPT4MTS: Prompt-based Large Language Model for Multimodal Time-series Forecasting


  • Furong Jia University of Southern California
  • Kevin Wang University of Southern California
  • Yixiang Zheng University of Southern California
  • Defu Cao University of Southern California
  • Yan Liu University of Southern California



Large Language Models, Multi-modal Dataset, Time-series Forecasting, AI For Accessibility


Time series forecasting is an essential area of machine learning with a wide range of real-world applications. Most previous forecasting models aim to capture dynamic characteristics from uni-modal numerical historical data. Although extra knowledge can boost time series forecasting performance, such information is hard to collect, and fusing the multimodal information is non-trivial. In this paper, we first propose a general principle for collecting corresponding textual information from different data sources with the help of modern large language models (LLMs). We then propose a prompt-based LLM framework, named GPT4MTS, that utilizes both numerical data and textual information simultaneously. In practice, we construct a GDELT-based multimodal time series dataset for news impact forecasting, which provides a concise and well-structured time series dataset with textual information for further research in communication. Through extensive experiments, we demonstrate the effectiveness of our proposed method on forecasting tasks with extra textual information.
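The abstract's core idea of feeding both numerical history and text-derived prompt embeddings into one model can be sketched at a high level. The toy NumPy fragment below illustrates only the fusion step: a time series is split into patches, each patch is linearly embedded, the patch tokens are prepended with prompt tokens standing in for encoded text, and a linear head reads out a forecast. All dimensions, names, and the linear read-out are illustrative assumptions, not the paper's actual architecture (which would place a frozen LLM backbone where the placeholder comment sits).

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_series(series, patch_len, stride):
    """Split a 1-D series into overlapping patches, as in patch-based TS models."""
    idx = range(0, len(series) - patch_len + 1, stride)
    return np.stack([series[i:i + patch_len] for i in idx])  # (n_patches, patch_len)

# Hypothetical dimensions -- purely illustrative, not from the paper.
d_model, patch_len, stride, horizon = 16, 8, 4, 12
history = rng.standard_normal(96)                 # numerical input series
prompt_emb = rng.standard_normal((4, d_model))    # stand-in for encoded textual summary

# Linearly embed each numerical patch, then prepend the prompt tokens.
patches = patch_series(history, patch_len, stride)        # (23, 8)
W_embed = rng.standard_normal((patch_len, d_model)) * 0.1
tokens = np.concatenate([prompt_emb, patches @ W_embed])  # (27, 16)

# A (frozen) LLM backbone would process `tokens` here; as a placeholder we
# apply a linear read-out on the flattened token sequence.
W_out = rng.standard_normal((tokens.size, horizon)) * 0.01
forecast = tokens.reshape(-1) @ W_out
print(forecast.shape)  # (12,)
```

The key design choice this sketch mirrors is that text enters the model as extra tokens in the same embedding space as the numerical patches, so a pretrained sequence model can attend across both modalities without architectural changes.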



How to Cite

Jia, F., Wang, K., Zheng, Y., Cao, D., & Liu, Y. (2024). GPT4MTS: Prompt-based Large Language Model for Multimodal Time-series Forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23343-23351.



EAAI: Mentored Undergraduate Research Challenge: AI for Accessibility in Communication