Temporal Latent Auto-Encoder: A Method for Probabilistic Multivariate Time Series Forecasting

Authors

  • Nam Nguyen, IBM Research
  • Brian Quanz, IBM Research

DOI:

https://doi.org/10.1609/aaai.v35i10.17101

Keywords:

Time-Series/Data Streams, (Deep) Neural Network Algorithms, Neural Generative Models & Autoencoders, Scalability of ML Systems

Abstract

Probabilistic forecasting of high-dimensional multivariate time series is a notoriously challenging task, both in terms of computational burden and distribution modeling. Most previous work either makes simple distribution assumptions or abandons modeling cross-series correlations. A promising line of work exploits scalable matrix factorization for latent-space forecasting, but it is limited to linear embeddings, cannot model distributions, and cannot be trained end-to-end with deep learning forecasting models. We introduce a novel temporal latent auto-encoder method that enables nonlinear factorization of multivariate time series, learned end-to-end with a temporal deep learning latent-space forecast model. By imposing a probabilistic latent-space model, the decoder captures complex distributions of the input series. Extensive experiments demonstrate that our model achieves state-of-the-art performance on many popular multivariate datasets, with gains sometimes as high as 50% on several standard metrics.
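
The abstract describes the architecture only at a high level. The following minimal PyTorch sketch illustrates one way the pieces could fit together, under assumptions not stated here: an MLP encoder/decoder, an LSTM as the temporal latent forecaster, and a diagonal-Gaussian latent parameterization. The class name TLAESketch and all layer sizes are illustrative choices, not the authors' implementation.

# Hypothetical sketch of the temporal latent auto-encoder idea: encode the N series
# into a low-dimensional latent space, forecast probabilistically in that space, and
# decode samples back to the full series. Not the paper's reference code.
import torch
import torch.nn as nn

class TLAESketch(nn.Module):
    def __init__(self, n_series: int, latent_dim: int = 16, hidden: int = 64):
        super().__init__()
        # Nonlinear "factorization": map each time step's N-dimensional observation
        # to a low-dimensional latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(n_series, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim)
        )
        # Temporal model operating in latent space (an LSTM is one plausible choice).
        self.temporal = nn.LSTM(latent_dim, hidden, batch_first=True)
        # Probabilistic latent forecast: mean and log-variance of a diagonal Gaussian
        # over the next latent state (an assumed parameterization).
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        # Decoder maps latent vectors back to all series; complex output distributions
        # arise from pushing latent Gaussian samples through this nonlinear map.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_series)
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, time, n_series)
        z = self.encoder(x)                          # (batch, time, latent_dim)
        h, _ = self.temporal(z)                      # latent-space temporal features
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        eps = torch.randn_like(mu)
        z_next = mu + eps * torch.exp(0.5 * logvar)  # sample next-step latents
        x_hat = self.decoder(z)                      # reconstruction of the inputs
        x_forecast = self.decoder(z_next)            # one-step-ahead forecast samples
        return x_hat, x_forecast, mu, logvar

if __name__ == "__main__":
    model = TLAESketch(n_series=100)
    batch = torch.randn(8, 24, 100)                  # 8 windows, 24 steps, 100 series
    x_hat, x_forecast, mu, logvar = model(batch)
    print(x_hat.shape, x_forecast.shape)             # both (8, 24, 100)

Because the encoder, latent forecaster, and decoder are composed in a single module, reconstruction and forecast losses can be backpropagated jointly, which is the end-to-end training property the abstract emphasizes; drawing multiple latent samples per step would yield predictive quantiles in the observation space.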

Published

2021-05-18

How to Cite

Nguyen, N., & Quanz, B. (2021). Temporal Latent Auto-Encoder: A Method for Probabilistic Multivariate Time Series Forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9117-9125. https://doi.org/10.1609/aaai.v35i10.17101

Section

AAAI Technical Track on Machine Learning III