Learning Vector Autoregressive Models With Latent Processes

Authors

  • Saber Salehkaleybar, University of Illinois at Urbana-Champaign
  • Jalal Etesami, University of Illinois at Urbana-Champaign
  • Negar Kiyavash, University of Illinois at Urbana-Champaign
  • Kun Zhang, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v32i1.11603

Keywords:

Graphical model learning, Causal structures, VAR models

Abstract

We study the problem of learning the support of the transition matrix between random processes in a Vector Autoregressive (VAR) model from samples when a subset of the processes is latent. It is well known that ignoring the effect of the latent processes may lead to very different estimates of the influences among observed processes, and we are concerned with identifying the influences among the observed processes, those among the latent ones, and those from the latent to the observed ones. We show that the support of the transition matrix among the observed processes and the lengths of all latent paths between any two observed processes can be identified successfully under some conditions on the VAR model. From the lengths of the latent paths, we reconstruct the latent subgraph (representing the influences among the latent processes) with a minimum number of variables uniquely if its topology is a directed tree. Furthermore, we propose an algorithm that finds all possible minimal latent graphs under some conditions on the lengths of the latent paths. Our results apply to both non-Gaussian and Gaussian cases, and experimental results on various synthetic and real-world datasets validate our theoretical results.
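To illustrate the phenomenon the abstract describes, the following is a minimal sketch, not the authors' algorithm: it simulates a first-order VAR with one latent coordinate and non-Gaussian (Laplace) noise, then compares the transition-matrix support recovered when the latent process is ignored against the true support among the observed processes. All variable names and the block structure (observed block followed by latent block) are illustrative assumptions.

```python
# Sketch: how ignoring a latent process in a VAR(1) model can distort
# the estimated support of the transition matrix among observed processes.
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_lat, T = 3, 1, 20000
d = n_obs + n_lat  # coordinates 0..2 are observed, coordinate 3 is latent

# Full transition matrix; a latent path 2 -> 3 -> 1 of length 2 links
# two observed processes through the latent one.
A = np.zeros((d, d))
A[0, 1] = 0.4   # observed 1 -> observed 0
A[1, 3] = 0.5   # latent 3  -> observed 1
A[2, 2] = 0.6   # observed 2 self-dependence
A[3, 2] = 0.5   # observed 2 -> latent 3
A[3, 3] = 0.3   # latent 3 self-dependence

# Generate x_t = A x_{t-1} + e_t with Laplace innovations.
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.laplace(scale=1.0, size=d)

def ols_transition(Z):
    """Least-squares estimate of B in z_t = B z_{t-1} + noise."""
    Y, Zlag = Z[1:], Z[:-1]
    return np.linalg.lstsq(Zlag, Y, rcond=None)[0].T

A_obs = ols_transition(X[:, :n_obs])  # fit using only the observed block

thr = 0.1
print("true support (observed block):")
print((np.abs(A[:n_obs, :n_obs]) > thr).astype(int))
print("estimated support when the latent process is ignored:")
print((np.abs(A_obs) > thr).astype(int))
```

In this toy setup the lag-1 fit on the observed block alone typically reports a spurious influence from process 2 to process 1, induced by the latent path of length 2, which is the kind of distortion the paper's identification conditions are designed to resolve.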


Published

2018-04-29

How to Cite

Salehkaleybar, S., Etesami, J., Kiyavash, N., & Zhang, K. (2018). Learning Vector Autoregressive Models With Latent Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11603