Video Generation From Text

Authors

  • Yitong Li, Duke University
  • Martin Min, NEC Laboratories America
  • Dinghan Shen, Duke University
  • David Carlson, Duke University
  • Lawrence Carin, Duke University

DOI:

https://doi.org/10.1609/aaai.v32i1.12233

Keywords:

video generation, variational autoencoder, generative adversarial network

Abstract

Generating videos from text has proven to be a significant challenge for existing generative models. We tackle this problem by training a conditional generative model to extract both static and dynamic information from text. This is realized in a hybrid framework that combines a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN). The static features, called the "gist," sketch the text-conditioned background color and object layout. Dynamic features are captured by transforming the input text into an image filter. To obtain enough data for training the deep-learning model, we develop a method to automatically create a matched text-video corpus from publicly available online videos. Experimental results show that the proposed framework generates plausible, diverse, and smooth short-duration videos that accurately reflect the input text. It significantly outperforms baseline models that directly adapt text-to-image generation procedures to produce videos. Performance is evaluated both visually and by adapting the inception score used to evaluate image generation with GANs.
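As a rough illustration of the pipeline described in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: the module names (GistVAE, TextToFilter, VideoGenerator), tensor dimensions, and layer choices are assumptions made for readability, and the discriminator and training losses are omitted. It shows the two roles the text plays, a VAE-style "gist" capturing static background/layout, and a text-conditioned convolutional filter injecting dynamic information before a generator renders frames.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GistVAE(nn.Module):
    """VAE-style sketch: text embedding -> latent code -> low-resolution "gist"
    image encoding static background color and object layout (illustrative only)."""
    def __init__(self, text_dim, z_dim=64, img_size=64):
        super().__init__()
        self.mu = nn.Linear(text_dim, z_dim)
        self.logvar = nn.Linear(text_dim, z_dim)
        self.decode = nn.Sequential(
            nn.Linear(z_dim + text_dim, 3 * img_size * img_size), nn.Tanh())
        self.img_size = img_size

    def forward(self, text_emb):
        mu, logvar = self.mu(text_emb), self.logvar(text_emb)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        gist = self.decode(torch.cat([z, text_emb], dim=1))
        return gist.view(-1, 3, self.img_size, self.img_size), mu, logvar


class TextToFilter(nn.Module):
    """Maps a text embedding to the weights of a 2-D convolution (the "text filter")
    that is applied to the gist image, injecting motion-related information."""
    def __init__(self, text_dim, in_ch=3, out_ch=16, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.to_weights = nn.Linear(text_dim, out_ch * in_ch * k * k)

    def forward(self, gist, text_emb):
        # gist: (B, in_ch, H, W); text_emb: (B, text_dim)
        B, _, H, W = gist.shape
        w = self.to_weights(text_emb).view(B * self.out_ch, self.in_ch, self.k, self.k)
        # grouped-convolution trick: one predicted filter bank per batch sample
        out = F.conv2d(gist.view(1, B * self.in_ch, H, W), w,
                       padding=self.k // 2, groups=B)
        return out.view(B, self.out_ch, H, W)


class VideoGenerator(nn.Module):
    """GAN-generator sketch: text-filtered gist features plus noise -> T frames."""
    def __init__(self, feat_ch=16, noise_dim=32, T=8):
        super().__init__()
        self.T = T
        self.to_frames = nn.Sequential(
            nn.Conv2d(feat_ch + noise_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * T, 3, padding=1), nn.Tanh())

    def forward(self, filtered_gist, noise):
        B, _, H, W = filtered_gist.shape
        n = noise.view(B, -1, 1, 1).expand(-1, -1, H, W)  # broadcast noise spatially
        frames = self.to_frames(torch.cat([filtered_gist, n], dim=1))
        return frames.view(B, self.T, 3, H, W)


# Usage sketch: the text embedding below stands in for a learned sentence encoder.
text_emb = torch.randn(4, 128)
gist, mu, logvar = GistVAE(text_dim=128)(text_emb)
filtered = TextToFilter(text_dim=128)(gist, text_emb)
video = VideoGenerator()(filtered, torch.randn(4, 32))   # -> (4, 8, 3, 64, 64)
```

In training, such a setup would combine a VAE objective (reconstruction plus KL term) for the gist with an adversarial loss on the generated frames; those losses are left out of the sketch above.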

Published

2018-04-27

How to Cite

Li, Y., Min, M., Shen, D., Carlson, D., & Carin, L. (2018). Video Generation From Text. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12233