Right Looks, Wrong Reasons: Compositional Fidelity in Text-to-Image Generation

Authors

  • Mayank Vatsa, IIT Jodhpur
  • Aparna Bharati, Lehigh University
  • Richa Singh, IIT Jodhpur

DOI

https://doi.org/10.1609/aaai.v40i46.41337

Abstract

The architectural blueprint of today’s leading text-to-image models contains a fundamental flaw: an inability to handle logical composition. This survey investigates this breakdown across three core primitives—negation, counting, and spatial relations. Our analysis reveals a dramatic performance collapse: models that are accurate on single primitives fail precipitously when these are combined, exposing severe interference. We trace this failure to three key factors. First, training data show a near-total absence of explicit negations. Second, continuous attention architectures are fundamentally unsuitable for discrete logic. Third, evaluation metrics reward visual plausibility over constraint satisfaction. By analyzing recent benchmarks and methods, we show that current solutions and simple scaling cannot bridge this gap. Achieving genuine compositionality, we conclude, will require fundamental advances in representation and reasoning rather than incremental adjustments to existing architectures.
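The abstract's third factor can be made concrete: a realism-style metric never inspects the prompt's logical constraints, whereas a constraint-satisfaction score checks each primitive explicitly. The sketch below is an illustrative toy, not the paper's evaluation protocol; the scene representation (object labels with normalized box centers) and the three checkers for negation, counting, and spatial relations are assumptions introduced here for exposition.

```python
# Toy sketch (illustrative assumption, not the survey's method): score a
# generated scene by the fraction of hard compositional constraints it
# satisfies, rather than by visual plausibility.
from dataclasses import dataclass

@dataclass
class Obj:
    label: str
    x: float  # normalized horizontal center of the bounding box, in [0, 1]

def check_negation(scene, banned):
    # "no <banned>": satisfied only if the object is entirely absent
    return all(o.label != banned for o in scene)

def check_count(scene, label, n):
    # "exactly <n> <label>": a count mismatch can still look plausible
    return sum(o.label == label for o in scene) == n

def check_left_of(scene, a, b):
    # "<a> left of <b>": every a must lie strictly left of every b
    ax = [o.x for o in scene if o.label == a]
    bx = [o.x for o in scene if o.label == b]
    return bool(ax) and bool(bx) and max(ax) < min(bx)

def constraint_score(scene, constraints):
    # Fraction of constraints satisfied; a realism metric ignores all of this.
    results = [c(scene) for c in constraints]
    return sum(results) / len(results)

# A plausible-looking scene that violates the counting constraint:
scene = [Obj("apple", 0.2), Obj("apple", 0.3), Obj("cat", 0.7)]
constraints = [
    lambda s: check_negation(s, "dog"),
    lambda s: check_count(s, "apple", 3),  # fails: only two apples present
    lambda s: check_left_of(s, "apple", "cat"),
]
print(constraint_score(scene, constraints))  # 2 of 3 satisfied -> 0.666...
```

The point of the sketch is the interference the abstract describes: each checker is easy in isolation, but a prompt combining all three must pass every check at once, so per-primitive accuracies multiply into a much lower joint satisfaction rate.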

Published

2026-03-14

How to Cite

Vatsa, M., Bharati, A., & Singh, R. (2026). Right Looks, Wrong Reasons: Compositional Fidelity in Text-to-Image Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(46), 39797–39805. https://doi.org/10.1609/aaai.v40i46.41337