UniAlignment: Semantic Alignment for Unified Image Generation, Understanding, Manipulation and Perception

Authors

  • Xinyang Song, School of Artificial Intelligence, University of Chinese Academy of Sciences; Institute of Automation, Chinese Academy of Sciences
  • Libin Wang, Ant Group
  • Weining Wang, Institute of Automation, Chinese Academy of Sciences
  • Shaozhen Liu, School of Artificial Intelligence, University of Chinese Academy of Sciences; Institute of Automation, Chinese Academy of Sciences
  • Dandan Zheng, Ant Group
  • Jingdong Chen, Ant Group
  • Qi Li, School of Artificial Intelligence, University of Chinese Academy of Sciences; Institute of Automation, Chinese Academy of Sciences
  • Zhenan Sun, School of Artificial Intelligence, University of Chinese Academy of Sciences; Institute of Automation, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i11.37868

Abstract

The remarkable success of diffusion models in text-to-image generation has sparked growing interest in expanding their capabilities to a variety of multi-modal tasks, including image understanding, manipulation, and perception. These tasks require advanced semantic comprehension across both visual and textual modalities, especially in scenarios involving complex semantic instructions. However, existing approaches often rely heavily on vision-language models (VLMs) or modular designs for semantic guidance, leading to fragmented architectures and computational inefficiency. To address these challenges, we propose UniAlignment, a unified multimodal generation framework within a single diffusion transformer. UniAlignment introduces a dual-stream diffusion training strategy that incorporates both intrinsic-modal semantic alignment and cross-modal semantic alignment, thereby enhancing the model's cross-modal consistency and instruction-following robustness. Additionally, we present SemGen-Bench, a new benchmark specifically designed to evaluate multimodal semantic consistency under complex textual instructions. Extensive experiments across multiple tasks and benchmarks demonstrate that UniAlignment outperforms existing baselines, underscoring the significant potential of diffusion models in unified multimodal generation.

Published

2026-03-14

How to Cite

Song, X., Wang, L., Wang, W., Liu, S., Zheng, D., Chen, J., … Sun, Z. (2026). UniAlignment: Semantic Alignment for Unified Image Generation, Understanding, Manipulation and Perception. Proceedings of the AAAI Conference on Artificial Intelligence, 40(11), 9116–9126. https://doi.org/10.1609/aaai.v40i11.37868

Section

AAAI Technical Track on Computer Vision VIII