Semi-supervised Latent Disentangled Diffusion Model for Textile Pattern Generation

Authors

  • Chenggong Hu, Zhejiang University
  • Yi Wang, Zhejiang University
  • Mengqi Xue, Hangzhou City University
  • Haofei Zhang, Zhejiang University
  • Jie Song, Zhejiang University
  • Li Sun, Ningbo Global Innovation Center, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v40i6.42482

Abstract

Textile pattern generation (TPG) aims to synthesize fine-grained textile pattern images based on given clothing images. Although previous studies have not explicitly investigated TPG, existing image-to-image models appear to be natural candidates for this task. However, when applied directly, these methods often produce unfaithful results, failing to preserve fine-grained details due to feature confusion between complex textile patterns and the inherent non-rigid texture distortions in clothing images. In this paper, we propose a novel method, SLDDM-TPG, for faithful and high-fidelity TPG. Our method consists of two stages: (1) a latent disentangled network (LDN) that resolves feature confusion in clothing representations and constructs a multi-dimensional, independent clothing feature space; and (2) a semi-supervised latent diffusion model (S-LDM), which receives guidance signals from LDN and generates faithful results through semi-supervised diffusion training, combined with our designed fine-grained alignment strategy. Extensive evaluations show that SLDDM-TPG reduces FID by 4.1 and improves SSIM by up to 0.116 on our CTP-HD dataset, and that it also generalizes well to the VITON-HD dataset.
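
The abstract only outlines the two-stage design, so the sketch below is a minimal, assumption-laden illustration of how such a pipeline could be wired together in PyTorch: a disentangling encoder (standing in for LDN) that separates a pattern factor from a distortion factor, and a conditional denoiser (standing in for S-LDM) that takes the pattern factor as guidance. All class names, layer choices, shapes, and the conditioning mechanism are hypothetical; the paper's actual architecture, semi-supervised training, and fine-grained alignment strategy are not reproduced here.

```python
# Minimal, illustrative sketch of a two-stage "disentangle, then condition a
# diffusion denoiser" pipeline. Every name and hyperparameter is an assumption,
# not the authors' implementation.
import torch
import torch.nn as nn

class LatentDisentangledNetwork(nn.Module):
    """Stage 1 (assumed form): encode a clothing image into separate
    pattern and distortion factors so they no longer confound each other."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_pattern = nn.Linear(128, latent_dim)     # textile-pattern factor
        self.to_distortion = nn.Linear(128, latent_dim)  # non-rigid distortion factor

    def forward(self, clothing_image):
        h = self.backbone(clothing_image)
        return self.to_pattern(h), self.to_distortion(h)

class ConditionalDenoiser(nn.Module):
    """Stage 2 (assumed form): a latent-space denoiser that predicts noise,
    conditioned on the pattern factor from stage 1 as a guidance signal."""
    def __init__(self, latent_channels=4, cond_dim=256):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, latent_channels)
        self.net = nn.Sequential(
            nn.Conv2d(latent_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 3, padding=1),
        )

    def forward(self, noisy_latent, t, pattern_cond):
        # Inject guidance as a per-channel bias (one simple conditioning choice).
        # The timestep t is accepted but unused here; a real denoiser would embed it.
        bias = self.cond_proj(pattern_cond)[:, :, None, None]
        return self.net(noisy_latent + bias)

# Toy forward pass with random tensors, just to show how the pieces connect.
ldn = LatentDisentangledNetwork()
denoiser = ConditionalDenoiser()
clothing = torch.randn(2, 3, 256, 256)
pattern_code, _ = ldn(clothing)
noisy = torch.randn(2, 4, 32, 32)
t = torch.randint(0, 1000, (2,))
eps_pred = denoiser(noisy, t, pattern_code)
```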

Published

2026-03-14

How to Cite

Hu, C., Wang, Y., Xue, M., Zhang, H., Song, J., & Sun, L. (2026). Semi-supervised Latent Disentangled Diffusion Model for Textile Pattern Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(6), 4798–4806. https://doi.org/10.1609/aaai.v40i6.42482

Issue

Vol. 40 No. 6 (2026)

Section

AAAI Technical Track on Computer Vision III