ExprGAN: Facial Expression Editing With Controllable Expression Intensity

Authors

  • Hui Ding, University of Maryland, College Park
  • Kumar Sricharan, PARC, Palo Alto
  • Rama Chellappa, University of Maryland, College Park

DOI:

https://doi.org/10.1609/aaai.v32i1.12277

Keywords:

facial expression editing, generative adversarial network

Abstract

Facial expression editing is a challenging task, as it requires a high-level semantic understanding of the input face image. Conventional methods either require paired training data or produce synthetic faces at low resolution. Moreover, they can only change the category of the facial expression, not its strength. To address these limitations, we propose an Expression Generative Adversarial Network (ExprGAN) for photo-realistic facial expression editing with controllable expression intensity. An expression controller module is specially designed to learn an expressive and compact expression code in addition to the encoder-decoder network. This novel architecture enables the expression intensity to be continuously adjusted from low to high. We further show that ExprGAN can be applied to other tasks, such as expression transfer, image retrieval, and data augmentation for training improved facial expression recognition models. To tackle the small size of the training database, an effective incremental learning scheme is proposed. Quantitative and qualitative evaluations on the widely used Oulu-CASIA dataset demonstrate the effectiveness of ExprGAN.
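The abstract describes a decoder conditioned on a compact expression code whose magnitude controls intensity. The following minimal NumPy sketch illustrates that idea only; the dimensions, the block structure of the code, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of an intensity-controllable expression code.
# Assumed (not from the paper): 6 expression categories, a 10-dim block
# per category, and a 50-dim identity code from the encoder.
NUM_EXPRESSIONS = 6
CODE_DIM = 10
IDENTITY_DIM = 50

def make_expression_code(expr_label: int, intensity: float) -> np.ndarray:
    """Build a compact expression code: the block for the target expression
    carries the continuous intensity value; all other blocks stay zero."""
    code = np.zeros(NUM_EXPRESSIONS * CODE_DIM)
    start = expr_label * CODE_DIM
    code[start:start + CODE_DIM] = intensity
    return code

def decoder_input(identity_code: np.ndarray,
                  expr_label: int,
                  intensity: float) -> np.ndarray:
    """Concatenate identity and expression codes, as a conditional
    decoder would consume them."""
    return np.concatenate([identity_code,
                           make_expression_code(expr_label, intensity)])

# Sweeping intensity from low to high for a fixed identity and expression
# category yields a continuum of decoder inputs, which is what makes
# continuous expression editing possible in this kind of architecture.
identity = np.random.randn(IDENTITY_DIM)
weak = decoder_input(identity, expr_label=2, intensity=0.1)
strong = decoder_input(identity, expr_label=2, intensity=0.9)
```

In this sketch the identity part of the input is shared across intensities, so only the expression code changes as intensity is swept, mirroring the abstract's claim that intensity can be adjusted continuously for the same subject.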

Published

2018-04-27

How to Cite

Ding, H., Sricharan, K., & Chellappa, R. (2018). ExprGAN: Facial Expression Editing With Controllable Expression Intensity. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12277