A Layer-Based Sequential Framework for Scene Generation with GANs


  • Mehmet Ozgur Turkoglu University of Twente
  • William Thong University of Amsterdam
  • Luuk Spreeuwers University of Twente
  • Berkay Kicanaoglu University of Amsterdam




The visual world we sense, interpret, and interact with every day is a complex composition of interleaved physical entities. Generating vivid scenes of similar complexity with computers is therefore a very challenging task. In this work, we present a scene generation framework based on Generative Adversarial Networks (GANs) that composes a scene sequentially, breaking the underlying problem down into smaller ones. Unlike existing approaches, our framework offers explicit control over the elements of a scene through separate background and foreground generators. Starting from an initially generated background, foreground objects then populate the scene one by one in a sequential manner. Via quantitative and qualitative experiments on a subset of the MS-COCO dataset, we show that our proposed framework not only produces more diverse images but also copes better with affine transformations and occlusion artifacts of foreground objects than its counterparts.
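The sequential, layer-based composition the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `generate_background` and `generate_foreground` are hypothetical stand-ins for the trained GAN generators, images are flattened pixel lists, and each foreground layer is alpha-blended onto the running canvas one object at a time.

```python
import random

def generate_background(size, seed=0):
    # Hypothetical stand-in for the background generator:
    # returns a flat grayscale canvas of `size` pixels in [0, 1].
    rng = random.Random(seed)
    return [rng.random() for _ in range(size)]

def generate_foreground(size, seed):
    # Hypothetical stand-in for a foreground generator: returns the
    # object's appearance plus a soft alpha mask (0 = transparent).
    rng = random.Random(seed)
    appearance = [rng.random() for _ in range(size)]
    mask = [1.0 if i % 3 == 0 else 0.0 for i in range(size)]
    return appearance, mask

def compose_scene(size, n_objects):
    # Layer-based sequential composition: start from the background,
    # then blend each generated foreground object one by one, so every
    # scene element stays individually controllable.
    canvas = generate_background(size)
    for k in range(n_objects):
        fg, mask = generate_foreground(size, seed=k + 1)
        canvas = [m * f + (1.0 - m) * c
                  for c, f, m in zip(canvas, fg, mask)]
    return canvas

scene = compose_scene(size=12, n_objects=3)
```

Because each object enters through its own generator call and mask, occlusion falls out of the compositing order and individual objects can be added, removed, or transformed without regenerating the whole scene.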




How to Cite

Turkoglu, M. O., Thong, W., Spreeuwers, L., & Kicanaoglu, B. (2019). A Layer-Based Sequential Framework for Scene Generation with GANs. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8901-8908. https://doi.org/10.1609/aaai.v33i01.33018901



AAAI Technical Track: Vision