Multiscale Generative Models: Improving Performance of a Generative Model Using Feedback from Other Dependent Generative Models

Authors

  • Changyu Chen, Singapore Management University
  • Avinandan Bose, Indian Institute of Technology Kanpur
  • Shih-Fen Cheng, Singapore Management University
  • Arunesh Sinha, Singapore Management University

DOI:

https://doi.org/10.1609/aaai.v36i6.20568

Keywords:

Machine Learning (ML), Multiagent Systems (MAS)

Abstract

Realistic, fine-grained multi-agent simulation of real-world complex systems is crucial for many downstream tasks such as reinforcement learning. Recent work has used generative models (GANs in particular) to provide high-fidelity simulation of real-world systems. However, such generative models are often monolithic and fail to capture the interactions among agents in multi-agent systems. In this work, we take a first step towards building multiple interacting generative models (GANs) that reflect the interactions present in the real world. We build and analyze a hierarchical set-up in which a higher-level GAN is conditioned on the output of multiple lower-level GANs. We present a technique that uses feedback from the higher-level GAN to improve the performance of the lower-level GANs. We mathematically characterize the conditions under which our technique is impactful, including an understanding of the transfer-learning nature of our set-up. We present three distinct experiments on synthetic data, time-series data, and the image domain, demonstrating the wide applicability of our technique.
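The abstract describes the hierarchical architecture only at a high level. Below is a minimal PyTorch sketch of one plausible reading, in which the lower-level generator receives an extra loss term from the higher-level discriminator; the network shapes, the feedback weight lam, and the exact form of the feedback term are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch (PyTorch) of the hierarchical set-up described above: a
# lower-level GAN produces samples, a higher-level GAN is conditioned on those
# samples, and the higher-level discriminator's score is fed back into the
# lower-level generator's loss. All sizes and the feedback weight `lam` are
# illustrative assumptions; the paper's exact feedback mechanism may differ.
import torch
import torch.nn as nn

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))

G_low = mlp(4, 2)                                    # lower-level generator
D_low = nn.Sequential(mlp(2, 1), nn.Sigmoid())       # lower-level discriminator
G_high = mlp(4 + 2, 2)                               # conditioned on lower-level output
D_high = nn.Sequential(mlp(2 + 2, 1), nn.Sigmoid())  # scores (x_low, x_high) jointly

opt_g_low = torch.optim.Adam(G_low.parameters(), lr=1e-3)
bce = nn.BCELoss()
lam = 0.1  # hypothetical weight on the higher-level feedback term

for step in range(100):
    z_low, z_high = torch.randn(32, 4), torch.randn(32, 4)
    x_low = G_low(z_low)                                # lower-level sample
    x_high = G_high(torch.cat([z_high, x_low], dim=1))  # higher-level sample, conditioned on x_low

    ones = torch.ones(32, 1)
    loss_low = bce(D_low(x_low), ones)  # standard GAN generator loss for the lower level
    # Feedback: the higher-level discriminator also judges the pair (x_low, x_high);
    # its gradient flows back into G_low through x_low.
    feedback = bce(D_high(torch.cat([x_low, x_high], dim=1)), ones)

    opt_g_low.zero_grad()
    (loss_low + lam * feedback).backward()
    opt_g_low.step()
    # Discriminator updates and the higher-level generator update are omitted here.
```

The key point of this sketch is that x_low remains in the computation graph of the higher-level discriminator's output, so the feedback term nudges the lower-level generator toward samples that also make the joint higher-level output look realistic.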


Published

2022-06-28

How to Cite

Chen, C., Bose, A., Cheng, S.-F., & Sinha, A. (2022). Multiscale Generative Models: Improving Performance of a Generative Model Using Feedback from Other Dependent Generative Models. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6193-6201. https://doi.org/10.1609/aaai.v36i6.20568


Section

AAAI Technical Track on Machine Learning I