Coarse-to-Fine Generative Modeling for Graphic Layouts
Keywords: Computer Vision (CV)
Abstract
Even though graphic layout generation has attracted growing attention recently, it is still challenging to synthesize realistic and diverse layouts, due to complicated element relationships and varied element arrangements. In this work, we seek to improve layout generation by incorporating the concept of regions, each of which consists of a smaller number of elements and resembles a simple layout, into the generation process. Specifically, we adopt a Variational Autoencoder (VAE) as the overall architecture and decompose the decoding process into two stages. The first stage predicts a representation for each region, and the second stage fills in the detailed position of each element within a region based on the predicted region representation. Compared to prior studies that merely abstract the layout into a list of elements and generate all element positions in one go, our approach has at least two advantages. First, the two-stage decoding decouples the complex layout generation task into several simple layout generation tasks, which reduces the problem difficulty. Second, the predicted regions give the model a rough sense of what the graphic layout looks like and serve as global context to improve the generation of detailed element positions. Qualitative and quantitative experiments demonstrate that our approach significantly outperforms existing methods, especially on complex graphic layouts.
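The two-stage decoding described in the abstract can be illustrated with a minimal toy sketch. This is not the paper's actual model: the dimensions, linear maps, and number of regions below are all hypothetical stand-ins, and real layers would be learned neural networks. It only shows the control flow, where a latent code is first decoded into coarse region representations, and each region representation is then decoded into detailed element boxes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
LATENT_DIM, REGION_DIM, N_REGIONS, ELEMS_PER_REGION = 8, 6, 3, 4

# Stage-1 "decoder" (toy linear map): latent code -> one representation per region.
W1 = rng.standard_normal((LATENT_DIM, N_REGIONS * REGION_DIM))

# Stage-2 "decoder" (toy linear map): region representation -> (x, y, w, h) per element.
W2 = rng.standard_normal((REGION_DIM, ELEMS_PER_REGION * 4))

def decode(z):
    """Coarse-to-fine decoding: predict regions first, then fill in element boxes."""
    # Coarse stage: one vector per region, summarizing a simple sub-layout.
    regions = np.tanh(z @ W1).reshape(N_REGIONS, REGION_DIM)
    # Fine stage: each region representation conditions its element positions;
    # a sigmoid keeps box coordinates in [0, 1] (normalized canvas coordinates).
    boxes = 1.0 / (1.0 + np.exp(-(regions @ W2)))
    return boxes.reshape(N_REGIONS, ELEMS_PER_REGION, 4)

z = rng.standard_normal(LATENT_DIM)  # a sample from the VAE latent space
layout = decode(z)
print(layout.shape)  # (3, 4, 4): regions x elements x (x, y, w, h)
```

In the real model each stage would be a learned decoder trained with the VAE objective; the point here is only that the fine stage never sees the full layout at once, but one region's worth of elements plus its region representation as context.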
How to Cite
Jiang, Z., Sun, S., Zhu, J., Lou, J.-G., & Zhang, D. (2022). Coarse-to-Fine Generative Modeling for Graphic Layouts. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 1096-1103. https://doi.org/10.1609/aaai.v36i1.19994
AAAI Technical Track on Computer Vision I