ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models
DOI:
https://doi.org/10.1609/aaai.v38i13.29371
Keywords:
ML: Evaluation and Analysis, CV: Visual Reasoning & Symbolic Representations
Abstract
The ability to understand visual concepts and to replicate and compose these concepts from images is a central goal for computer vision. Recent advances in text-to-image (T2I) models have led to high-fidelity, realistic image generation by learning from large databases of images and their descriptions. However, the evaluation of T2I models has focused on photorealism and limited qualitative measures of visual understanding. To quantify the ability of T2I models to learn and synthesize novel visual concepts (a.k.a. personalized T2I), we introduce ConceptBed, a large-scale dataset that consists of 284 unique visual concepts and 33K composite text prompts. Along with the dataset, we propose an evaluation metric, Concept Confidence Deviation (CCD), that uses the confidence of oracle concept classifiers to measure the alignment between concepts generated by T2I generators and concepts contained in target images. We evaluate visual concepts that are either objects, attributes, or styles, and also evaluate four dimensions of compositionality: counting, attributes, relations, and actions. Our human study shows that CCD is highly correlated with human understanding of concepts. Our results point to a trade-off between learning the concepts and preserving compositionality, which existing approaches struggle to overcome. The data, code, and interactive demo are available at: https://conceptbed.github.io/
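As a rough illustration of the metric described in the abstract, the sketch below assumes CCD is computed as the deviation between an oracle concept classifier's confidence on real target images of a concept and its confidence on the corresponding generated images; the function name and the exact aggregation are illustrative assumptions, not the paper's official implementation.

```python
import numpy as np

def concept_confidence_deviation(conf_target, conf_generated):
    """Hypothetical sketch of CCD (not the official implementation).

    conf_target: oracle classifier confidences on real target images
                 of a concept.
    conf_generated: oracle classifier confidences on images generated
                    by a personalized T2I model for that concept.

    A smaller deviation suggests the generated images preserve the
    concept about as faithfully as the real images do.
    """
    conf_target = np.asarray(conf_target, dtype=float)
    conf_generated = np.asarray(conf_generated, dtype=float)
    # Deviation of the mean generated-image confidence from the mean
    # confidence the oracle assigns to the real concept images.
    return float(conf_target.mean() - conf_generated.mean())
```

For example, if the oracle is ~0.92 confident on real images of a concept but only ~0.72 confident on generated ones, the deviation of ~0.2 indicates the generator has partially lost the concept.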
Published
2024-03-24
How to Cite
Patel, M., Gokhale, T., Baral, C., & Yang, Y. (2024). ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14554-14562. https://doi.org/10.1609/aaai.v38i13.29371
Section
AAAI Technical Track on Machine Learning IV