GPT4MTS: Prompt-based Large Language Model for Multimodal Time-series Forecasting

Authors

  • Furong Jia, University of Southern California
  • Kevin Wang, University of Southern California
  • Yixiang Zheng, University of Southern California
  • Defu Cao, University of Southern California
  • Yan Liu, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v38i21.30383

Keywords:

Large Language Models, Multi-modal Dataset, Time-series Forecasting, AI For Accessibility

Abstract

Time series forecasting is an essential area of machine learning with a wide range of real-world applications. Most previous forecasting models aim to capture dynamic characteristics from uni-modal numerical historical data. Although additional knowledge can boost forecasting performance, such information is hard to collect, and fusing multimodal information is non-trivial. In this paper, we first propose a general principle for collecting corresponding textual information from different data sources with the help of modern large language models (LLMs). We then propose GPT4MTS, a prompt-based LLM framework that leverages numerical data and textual information simultaneously. In practice, we construct a GDELT-based multimodal time-series dataset for news impact forecasting, which provides a concise and well-structured time-series dataset with accompanying textual information for further research in communication. Through extensive experiments, we demonstrate the effectiveness of our proposed method on forecasting tasks with extra textual information.
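To make the prompt-based fusion idea concrete, the following is a minimal sketch (not the authors' released code) of one way such a framework could be wired: textual summaries are encoded into a prompt embedding, the numerical history is split into patches, and both are fed jointly to a frozen GPT-2 backbone whose final hidden state is projected to the forecast horizon. All module names, dimensions, and the choice of GPT-2/BERT encoders are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model, BertModel, BertTokenizer


class PromptedTSForecaster(nn.Module):
    """Illustrative prompt-based multimodal forecaster (assumed architecture)."""

    def __init__(self, patch_len=16, stride=8, horizon=96, d_model=768):
        super().__init__()
        self.patch_len, self.stride, self.horizon = patch_len, stride, horizon
        # Frozen LLM backbone over the fused (prompt + patch) token sequence.
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Frozen text encoder turning news summaries into a prompt embedding.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        for p in self.text_encoder.parameters():
            p.requires_grad = False
        self.patch_embed = nn.Linear(patch_len, d_model)  # numerical patches -> tokens
        self.head = nn.Linear(d_model, horizon)           # last hidden state -> forecast

    def forward(self, series, texts):
        # series: (batch, lookback) univariate history; texts: list of summary strings.
        patches = series.unfold(1, self.patch_len, self.stride)  # (B, n_patch, patch_len)
        ts_tokens = self.patch_embed(patches)                    # (B, n_patch, d_model)
        enc = self.tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
        prompt = self.text_encoder(**enc).last_hidden_state.mean(dim=1, keepdim=True)
        fused = torch.cat([prompt, ts_tokens], dim=1)            # prepend prompt token
        hidden = self.backbone(inputs_embeds=fused).last_hidden_state
        return self.head(hidden[:, -1])                          # (B, horizon)


# Toy usage: two series of length 96 with their textual summaries.
model = PromptedTSForecaster()
yhat = model(torch.randn(2, 96), ["policy news spike", "no notable events"])
print(yhat.shape)  # torch.Size([2, 96])
```

The key design point this sketch illustrates is that the backbone stays frozen while only the lightweight patch-embedding and output layers are trained, so the textual prompt steers the pretrained model rather than retraining it.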

Published

2024-03-24

How to Cite

Jia, F., Wang, K., Zheng, Y., Cao, D., & Liu, Y. (2024). GPT4MTS: Prompt-based Large Language Model for Multimodal Time-series Forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23343-23351. https://doi.org/10.1609/aaai.v38i21.30383

Section

EAAI: Mentored Undergraduate Research Challenge: AI for Accessibility in Communication