MAUGen: A Unified Diffusion Approach for Multi-Identity Facial Expression and AU Label Generation
DOI:
https://doi.org/10.1609/aaai.v40i3.37172
Abstract
The lack of large-scale, demographically diverse face images with precise Action Unit (AU) occurrence and intensity annotations has long been recognized as a fundamental bottleneck in developing generalizable facial AU recognition systems. In this paper, we propose MAUGen, a diffusion-based multi-modal framework that jointly generates a large collection of photorealistic facial expressions and anatomically consistent AU labels, covering both occurrence and intensity, conditioned on a single descriptive text prompt. MAUGen comprises two key modules: (1) a Multi-modal Representation Learning (MRL) module that captures the relationships among the paired facial textual description, facial identity, facial expression image, and AU activations within a unified latent space; and (2) a Diffusion-based Image-label Generator (DIG) that decodes the resulting joint representation into aligned facial image-label pairs across diverse identities. Under this framework, we introduce Multi-Identity Facial Action (MIFA), a large-scale multi-modal synthetic dataset (i.e., text descriptions and labeled face images) featuring comprehensive AU annotations and identity variations. Extensive experiments demonstrate that MAUGen outperforms existing methods in synthesizing photorealistic, demographically diverse facial images along with semantically aligned AU labels.
Published
2026-03-14
How to Cite
Li, X., Lou, Y., Gao, A., Zhang, W., & Song, S. (2026). MAUGen: A Unified Diffusion Approach for Multi-Identity Facial Expression and AU Label Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(3), 1918–1927. https://doi.org/10.1609/aaai.v40i3.37172
Section
AAAI Technical Track on Cognitive Modeling & Cognitive Systems