Human Synthesis and Scene Compositing

Authors

  • Mihai Zanfir (IMAR)
  • Elisabeta Oneata (IMAR)
  • Alin-Ionut Popa (IMAR)
  • Andrei Zanfir (IMAR)
  • Cristian Sminchisescu (Lund University)

DOI:

https://doi.org/10.1609/aaai.v34i07.6969

Abstract

Generating good-quality, geometrically plausible synthetic images of humans, with control over appearance, pose, and shape parameters, has become increasingly important for a variety of tasks ranging from photo editing and fashion virtual try-on to special effects and image compression. In this paper, we propose HUSC (HUman Synthesis and Scene Compositing), a framework for the realistic synthesis of humans with different appearances, in novel poses and scenes. Central to our formulation is 3D reasoning for both people and scenes, in order to produce realistic collages: we correctly model perspective effects and occlusion, take scene semantics into account, and adequately handle relative scales. Conceptually, our framework consists of three components: (1) a human image synthesis model with controllable pose and appearance, based on a parametric representation; (2) a person insertion procedure that leverages the geometry and semantics of the 3D scene; and (3) an appearance compositing process that creates a seamless blend between the colors of the scene and the generated human image, avoiding visual artifacts. The performance of our framework is supported by both qualitative and quantitative results, in particular state-of-the-art synthesis scores on the DeepFashion dataset.
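The abstract's third component combines per-pixel occlusion reasoning with color blending when placing the generated person into the scene. The sketch below illustrates the general idea, not the authors' actual method: the person layer is hidden wherever the scene surface is closer to the camera (a depth test), and alpha-blended with the scene colors elsewhere. All function and array names here are illustrative assumptions.

```python
import numpy as np

def composite_person(scene_rgb, scene_depth, person_rgb, person_alpha, person_depth):
    """Illustrative depth-aware compositing of a rendered person into a scene.

    scene_rgb, person_rgb: H x W x 3 float arrays in [0, 1].
    scene_depth, person_depth: H x W per-pixel distances to the camera.
    person_alpha: H x W matte in [0, 1] for the person layer.
    (A hypothetical interface; the paper's actual pipeline differs.)
    """
    # The person is visible only where it lies in front of the scene geometry.
    visible = (person_depth < scene_depth).astype(np.float32)
    alpha = (person_alpha * visible)[..., None]  # broadcast over RGB channels
    # Standard "over" alpha blend between the person and the scene colors.
    return alpha * person_rgb + (1.0 - alpha) * scene_rgb

# Tiny 1x2-pixel example: the scene occludes the person in the first pixel only.
scene_rgb = np.zeros((1, 2, 3))                # black scene
scene_depth = np.array([[1.0, 3.0]])           # wall at depth 1, then open space
person_rgb = np.ones((1, 2, 3))                # white person layer
person_alpha = np.ones((1, 2))
person_depth = np.full((1, 2), 2.0)            # person stands at depth 2

out = composite_person(scene_rgb, scene_depth, person_rgb, person_alpha, person_depth)
# Pixel 0: scene (depth 1) is in front of the person (depth 2) -> scene color.
# Pixel 1: person (depth 2) is in front of the scene (depth 3) -> person color.
```

A hard depth test like this leaves a sharp cutout boundary; the paper's appearance compositing step exists precisely to smooth such transitions and avoid visual artifacts.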

Published

2020-04-03

How to Cite

Zanfir, M., Oneata, E., Popa, A.-I., Zanfir, A., & Sminchisescu, C. (2020). Human Synthesis and Scene Compositing. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12749-12756. https://doi.org/10.1609/aaai.v34i07.6969

Section

AAAI Technical Track: Vision