Multi-Entity Dependence Learning With Rich Context via Conditional Variational Auto-Encoder


  • Luming Tang Tsinghua University
  • Yexiang Xue Cornell University
  • Di Chen Cornell University
  • Carla Gomes Cornell University



Multi-Entity Dependence Learning (MEDL) explores conditional correlations among multiple entities. The availability of rich contextual information calls for a nimble learning scheme that tightly integrates with deep neural networks and is able to capture correlation structures among exponentially many outcomes. We propose MEDL_CVAE, which encodes a conditional multivariate distribution as a generating process. As a result, the variational lower bound of the joint likelihood can be optimized via a conditional variational auto-encoder and trained end-to-end on GPUs. Our MEDL_CVAE was motivated by two real-world applications in computational sustainability: one studies the spatial correlation among multiple bird species using the eBird data, and the other models multi-dimensional landscape composition and human footprint in the Amazon rainforest with satellite images. We show that MEDL_CVAE captures rich dependency structures, scales better than previous methods, and further improves the joint likelihood by taking advantage of very large datasets that are beyond the capacity of previous methods.
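To make the generating process concrete, the following is a minimal sketch of a conditional variational auto-encoder for multi-entity binary labels, assuming PyTorch. The encoder q(z | x, y) maps context features and labels to a Gaussian latent, the decoder p(y | x, z) maps context and latent to per-entity logits, and training maximizes the variational lower bound (ELBO). All layer sizes, names (`MEDLCVAE`, `elbo_loss`), and the synthetic data are illustrative, not the authors' actual architecture or datasets.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEDLCVAE(nn.Module):
    """Illustrative conditional VAE for multi-entity binary outcomes."""
    def __init__(self, x_dim, y_dim, z_dim=8, hidden=64):
        super().__init__()
        # Encoder q(z | x, y): context + labels -> Gaussian latent parameters.
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        # Decoder p(y | x, z): context + latent -> per-entity logits.
        self.dec = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, y_dim))

    def forward(self, x, y):
        h = self.enc(torch.cat([x, y], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        logits = self.dec(torch.cat([x, z], dim=1))
        return logits, mu, logvar

def elbo_loss(logits, y, mu, logvar):
    # Negative ELBO: Bernoulli reconstruction term plus KL(q(z|x,y) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(logits, y, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / y.shape[0]

torch.manual_seed(0)
model = MEDLCVAE(x_dim=16, y_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(128, 16)                 # synthetic context features
y = (torch.rand(128, 10) < 0.3).float()  # synthetic multi-entity labels
first_loss = None
for step in range(200):
    logits, mu, logvar = model(x, y)
    loss = elbo_loss(logits, y, mu, logvar)
    if first_loss is None:
        first_loss = loss.item()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the whole pipeline is a single differentiable graph, it trains end-to-end with standard stochastic gradient methods and moves to a GPU with `model.to("cuda")`; at test time one would sample several z from the prior and decode to obtain joint label configurations.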




How to Cite

Tang, L., Xue, Y., Chen, D., & Gomes, C. (2018). Multi-Entity Dependence Learning With Rich Context via Conditional Variational Auto-Encoder. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).



Computational Sustainability and Artificial Intelligence