Learning Conditional Generative Models for Temporal Point Processes


  • Shuai Xiao Shanghai Jiao Tong University
  • Hongteng Xu Duke University
  • Junchi Yan Shanghai Jiao Tong University
  • Mehrdad Farajtabar Georgia Institute of Technology
  • Xiaokang Yang Shanghai Jiao Tong University
  • Le Song Georgia Institute of Technology
  • Hongyuan Zha Georgia Institute of Technology




Temporal Point Processes


Estimating the future event sequence conditioned on current observations is a long-standing and challenging task in temporal analysis. On the one hand, for many real-world problems the underlying dynamics can be very complex and often unknown, so traditional parametric point process models, with their limited capacity, often fail to fit the data. On the other hand, long-term prediction suffers from exposure bias, where errors accumulate and propagate into future predictions. Our new model builds upon the sequence-to-sequence (seq2seq) prediction network. Compared with parametric point process models, it has higher modeling capacity and greater flexibility for fitting real-world data. The main novelty of the paper is to mitigate the second challenge by introducing a likelihood-free loss based on the Wasserstein distance between point processes, in addition to the negative log-likelihood loss used in traditional seq2seq models. Unlike the KL divergence (i.e., the MLE loss), the Wasserstein distance is sensitive to the underlying geometry between samples and can robustly enforce geometric closeness between them. This technique is shown to improve the vanilla seq2seq model by a notable margin on various tasks.
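When a point process realization is viewed as a sorted list of event time stamps on a fixed observation horizon, a common way to compute a Wasserstein-style distance between two realizations is to pad the shorter sequence with the horizon time and sum the absolute differences between aligned events. The sketch below illustrates this idea; the function name `pp_wasserstein` and the padding convention are illustrative assumptions, not code from the paper:

```python
def pp_wasserstein(seq_a, seq_b, horizon):
    """Distance between two event-time sequences on [0, horizon].

    Events are aligned in sorted order; the shorter sequence is padded
    with the horizon time (an assumed border-penalty convention) so that
    unmatched events are charged by their distance to the horizon.
    """
    a, b = sorted(seq_a), sorted(seq_b)
    n = max(len(a), len(b))
    a = a + [horizon] * (n - len(a))  # pad shorter sequence with horizon
    b = b + [horizon] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))
```

For example, `pp_wasserstein([1, 2], [1], 10)` charges the matched pair `|1 - 1| = 0` plus `|2 - 10| = 8` for the unmatched event, giving 8; two identical sequences have distance 0.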




How to Cite

Xiao, S., Xu, H., Yan, J., Farajtabar, M., Yang, X., Song, L., & Zha, H. (2018). Learning Conditional Generative Models for Temporal Point Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12072