Multi-Entity Dependence Learning With Rich Context via Conditional Variational Auto-Encoder

Authors

  • Luming Tang, Tsinghua University
  • Yexiang Xue, Cornell University
  • Di Chen, Cornell University
  • Carla Gomes, Cornell University

Abstract

Multi-Entity Dependence Learning (MEDL) explores conditional correlations among multiple entities. The availability of rich contextual information calls for a nimble learning scheme that tightly integrates with deep neural networks and captures correlation structures among exponentially many outcomes. We propose MEDL_CVAE, which encodes a conditional multivariate distribution as a generating process; as a result, the variational lower bound of the joint likelihood can be optimized via a conditional variational auto-encoder and trained end-to-end on GPUs. MEDL_CVAE is motivated by two real-world applications in computational sustainability: one studies spatial correlations among multiple bird species using the eBird data, and the other models multi-dimensional landscape composition and human footprint in the Amazon rainforest from satellite images. We show that MEDL_CVAE captures rich dependency structures, scales better than previous methods, and further improves the joint likelihood by taking advantage of very large datasets that are beyond the capacity of previous methods.
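The abstract's core idea, lower-bounding the conditional joint likelihood p(y|x) over a multi-entity outcome y and optimizing that bound with a conditional variational auto-encoder, can be sketched numerically. The following is a minimal illustrative sketch, not the paper's implementation: the network shapes, parameter names, and the single-sample Monte Carlo ELBO estimator are all assumptions, and plain NumPy stands in for a trained GPU model.

```python
import numpy as np

def dense(x, w, b):
    return x @ w + b

def elbo_estimate(x, y, p, rng):
    """Single-sample Monte Carlo estimate of the conditional ELBO:
    log p(y|x) >= E_{q(z|x,y)}[log p(y|x,z)] - KL(q(z|x,y) || p(z)).
    Returns (elbo, kl); maximizing the ELBO trains encoder and decoder jointly."""
    # Encoder q(z|x,y): diagonal Gaussian conditioned on context and labels.
    h = np.tanh(dense(np.concatenate([x, y]), p["We"], p["be"]))
    mu = dense(h, p["Wm"], p["bm"])
    logvar = dense(h, p["Wv"], p["bv"])
    # Reparameterization trick keeps the estimator differentiable: z = mu + sigma * eps.
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    # Decoder p(y|x,z): independent Bernoulli logits, one per entity
    # (e.g., presence/absence of each bird species at a site).
    hd = np.tanh(dense(np.concatenate([x, z]), p["Wd"], p["bd"]))
    logits = dense(hd, p["Wo"], p["bo"])
    log_lik = float(np.sum(y * logits - np.log1p(np.exp(logits))))
    # Closed-form KL between the Gaussian posterior and a standard-normal prior.
    kl = float(0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar))
    return log_lik - kl, kl

def init_params(dx, dy, dz, dh, rng):
    """Random weights for illustration only (a real model would be trained)."""
    s = lambda *shape: 0.1 * rng.standard_normal(shape)
    return {"We": s(dx + dy, dh), "be": s(dh),
            "Wm": s(dh, dz), "bm": s(dz),
            "Wv": s(dh, dz), "bv": s(dz),
            "Wd": s(dx + dz, dh), "bd": s(dh),
            "Wo": s(dh, dy), "bo": s(dy)}

rng = np.random.default_rng(0)
params = init_params(dx=4, dy=3, dz=2, dh=8, rng=rng)
x = rng.standard_normal(4)          # rich context (e.g., habitat features)
y = np.array([1.0, 0.0, 1.0])       # observed multi-entity outcome
elbo, kl = elbo_estimate(x, y, params, rng)
```

Because the latent z is shared by all entities in the decoder, the model can represent correlations among the outcome dimensions even though each Bernoulli is conditionally independent given z, which is what lets a CVAE capture dependency structure over exponentially many joint outcomes.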

Published

2018-04-25

How to Cite

Tang, L., Xue, Y., Chen, D., & Gomes, C. (2018). Multi-Entity Dependence Learning With Rich Context via Conditional Variational Auto-Encoder. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11335

Section

Computational Sustainability and Artificial Intelligence