Reinforcement Learning Based Dynamic Model Combination for Time Series Forecasting

Authors

  • Yuwei Fu McGill University
  • Di Wu McGill University
  • Benoit Boulet McGill University

DOI:

https://doi.org/10.1609/aaai.v36i6.20618

Keywords:

Machine Learning (ML)

Abstract

Time series data appear in many real-world domains such as energy, transportation, and communication systems. Accurate modelling and forecasting of time series data can be of significant importance for improving the efficiency of these systems. Extensive research efforts have been devoted to time series problems, and different types of approaches, including both statistical methods and machine learning-based methods, have been investigated. Among these methods, ensemble learning has been shown to be effective and robust. However, how to determine the weights of the base models in an ensemble remains an open question, and sub-optimal weights may prevent the final model from reaching its full potential. To address this challenge, we propose a reinforcement learning (RL) based model combination (RLMC) framework for determining model weights in an ensemble for time series forecasting tasks. By formulating model selection as a sequential decision-making problem, RLMC learns a deterministic policy that outputs dynamic model weights for non-stationary time series data. RLMC further leverages deep learning to extract hidden features from raw time series data and thus adapt quickly to changing data distributions. Extensive experiments on multiple real-world datasets have been conducted to demonstrate the effectiveness of the proposed method.
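
The sketch below is illustrative only and is not the authors' implementation: the two toy base forecasters, the window length, the network sizes, and the function names are all assumptions made for this example. It shows the kind of dynamic weighting the abstract describes, where a small policy network maps a window of recent observations to softmax weights over base models and the final forecast is the weighted combination of their predictions. Training the policy with an RL objective (e.g., a reward based on the negative forecast error) is omitted here.

# Minimal sketch of dynamic model combination for one-step forecasting.
# All base models and hyperparameters below are placeholder assumptions.
import numpy as np
import torch
import torch.nn as nn

class WeightPolicy(nn.Module):
    """Maps a window of recent observations to softmax weights over base models."""
    def __init__(self, window: int, n_models: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, hidden), nn.ReLU(),
            nn.Linear(hidden, n_models),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Deterministic mapping from state (recent window) to model weights.
        return torch.softmax(self.net(x), dim=-1)

# Two toy base forecasters (stand-ins for trained forecasting models).
def naive_forecast(window: np.ndarray) -> float:
    return float(window[-1])

def moving_average_forecast(window: np.ndarray) -> float:
    return float(window.mean())

def combined_forecast(policy: WeightPolicy, window: np.ndarray) -> float:
    """Weight the base forecasts with the policy's output for this window."""
    x = torch.tensor(window, dtype=torch.float32).unsqueeze(0)
    weights = policy(x).squeeze(0).detach().numpy()
    base_preds = np.array([naive_forecast(window), moving_average_forecast(window)])
    return float(weights @ base_preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
    policy = WeightPolicy(window=10, n_models=2)
    window = series[:10]
    print("combined one-step forecast:", combined_forecast(policy, window))
    print("actual next value:", series[10])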

Published

2022-06-28

How to Cite

Fu, Y., Wu, D., & Boulet, B. (2022). Reinforcement Learning Based Dynamic Model Combination for Time Series Forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6639-6647. https://doi.org/10.1609/aaai.v36i6.20618

Section

AAAI Technical Track on Machine Learning I