TY  - JOUR
AU  - Du, Yuanqi
AU  - Guo, Xiaojie
AU  - Cao, Hengning
AU  - Ye, Yanfang
AU  - Zhao, Liang
PY  - 2022/06/28
Y2  - 2024/03/28
TI  - Disentangled Spatiotemporal Graph Generative Models
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 36
IS  - 6
SE  - AAAI Technical Track on Machine Learning I
DO  - 10.1609/aaai.v36i6.20607
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/20607
SP  - 6541
EP  - 6549
AB  - A spatiotemporal graph is a crucial data structure in which nodes and edges are embedded in a geometric space and their attribute values can evolve dynamically over time. Spatiotemporal graph data is becoming increasingly popular and important, ranging from the micro-scale (e.g., protein folding), to the middle-scale (e.g., dynamic functional connectivity), to the macro-scale (e.g., human mobility networks). Although disentangling and understanding the correlations among the spatial, temporal, and graph aspects has long been a key topic in network science, existing approaches typically rely on network processes hypothesized from human knowledge. They usually fit well the properties that the predefined principles are tailored for, but often perform poorly on others, especially in key domains where human knowledge is still very limited, such as protein folding and biological neuronal networks. In this paper, we aim to push forward the modeling and understanding of spatiotemporal graphs via new disentangled deep generative models. Specifically, a new Bayesian model is proposed that factorizes spatiotemporal graphs into spatial, temporal, and graph factors, as well as the factors that explain the interplay among them. A variational objective function and new mutual information thresholding algorithms driven by information bottleneck theory are proposed to maximize the disentanglement among the factors with theoretical guarantees.
Qualitative and quantitative experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed model over the state of the art by up to 69.2% for graph generation and 41.5% for interpretability.
ER  - 