Infinity Learning: Learning Markov Chains from Aggregate Steady-State Observations


  • Jianfei Gao, Purdue University
  • Mohamed A. Zahran, Purdue University
  • Amit Sheoran, Purdue University
  • Sonia Fahmy, Purdue University
  • Bruno Ribeiro, Purdue University



We consider the task of learning a parametric Continuous Time Markov Chain (CTMC) sequence model without examples of sequences, where the training data consists entirely of aggregate steady-state statistics. Making the problem harder, we assume that the states we wish to predict are unobserved in the training data. Specifically, given a parametric model over the transition rates of a CTMC and some known transition rates, we wish to extrapolate its steady-state distribution to states that are unobserved. A technical roadblock to learning a CTMC from its steady state has been that the chain rule used to compute gradients does not work over the arbitrarily long sequences necessary to reach steady state, from which the aggregate statistics are sampled. To overcome this optimization challenge, we propose ∞-SGD, a principled stochastic gradient descent method that uses randomly-stopped estimators to avoid the infinite sums required by the steady-state computation, while learning even when only a subset of the CTMC states can be observed. We apply ∞-SGD to a real-world testbed and to synthetic experiments, showcasing its accuracy, its ability to extrapolate the steady-state distribution to unobserved states under unobserved conditions (heavy loads, when training under light loads), and its success in difficult scenarios where even a tailor-made extension of existing methods fails.
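To give intuition for the randomly-stopped estimators mentioned above, the sketch below shows a generic Russian-roulette-style estimator of an infinite sum: truncate the sum at a random (geometric) stopping time and reweight each term by the inverse survival probability, which keeps the estimate unbiased. This is a minimal illustration of the general technique, not the paper's ∞-SGD implementation; the function name, the geometric stopping rule, and the example series are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def russian_roulette_sum(term, q=0.4, rng=rng):
    """Single-sample unbiased estimate of sum_{k=0}^inf term(k).

    Stops after each term with probability q, so P(N >= k) = (1 - q)^k.
    Dividing term(k) by (1 - q)^k preserves unbiasedness:
    E[estimate] = sum_k term(k) * P(N >= k) / P(N >= k) = sum_k term(k).
    (Illustrative helper, not from the paper.)
    """
    total = 0.0
    k = 0
    weight = 1.0  # equals 1 / P(N >= k)
    while True:
        total += term(k) * weight
        if rng.random() < q:  # stop with probability q
            return total
        weight /= (1.0 - q)
        k += 1


# Sanity check on a series with a known value:
# sum_{k>=0} 0.5**k = 2, estimated without ever summing infinitely many terms.
est = np.mean([russian_roulette_sum(lambda k: 0.5**k) for _ in range(20000)])
```

The design point is that each gradient-descent step only ever evaluates finitely many terms, yet the estimator's expectation matches the full infinite sum, which is what lets ∞-SGD sidestep the arbitrarily long horizons needed to reach steady state.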




How to Cite

Gao, J., Zahran, M. A., Sheoran, A., Fahmy, S., & Ribeiro, B. (2020). Infinity Learning: Learning Markov Chains from Aggregate Steady-State Observations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3922-3929.



AAAI Technical Track: Machine Learning