AEDR: Training-Free AI-Generated Image Attribution via Autoencoder Double-Reconstruction
DOI:
https://doi.org/10.1609/aaai.v40i12.37930

Abstract
The rapid advancement of image-generation technologies has made it possible for anyone to create photorealistic images using generative models, raising significant security concerns. To mitigate malicious use, tracing the origin of such images is essential. Reconstruction-based attribution methods offer a promising solution, but they often suffer from reduced accuracy and high computational costs when applied to state-of-the-art (SOTA) models. To address these challenges, we propose AEDR (AutoEncoder Double-Reconstruction), a novel training-free attribution method designed for generative models with continuous autoencoders. Unlike existing reconstruction-based approaches that rely on the value of a single reconstruction loss, AEDR performs two consecutive reconstructions using the model's autoencoder and adopts the ratio of the two reconstruction losses as the attribution signal, which inherently cancels out absolute biases caused by image complexity. To further improve accuracy, this signal is calibrated using an image homogeneity metric, while autoencoder-based reconstruction ensures superior computational efficiency. Experiments on eight top latent diffusion models show that AEDR achieves 25.5% higher attribution accuracy than existing reconstruction-based methods while requiring only 1% of the computational time.
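The core signal described above — the ratio of two consecutive reconstruction losses — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy linear "autoencoder" (`encode`/`decode` built from a random projection `W`) and the MSE loss stand in for the continuous autoencoder of a real latent diffusion model, and the homogeneity calibration step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in autoencoder (hypothetical): a random linear projection to a
# lower-dimensional latent space and back. The paper instead uses the
# continuous autoencoder of a generative model (e.g. a latent diffusion VAE).
W = rng.normal(size=(16, 64)) / 8.0   # encoder weights, latent dim 16

def encode(x):
    return x @ W.T

def decode(z):
    return z @ W  # decoder as the encoder's transpose (illustration only)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def aedr_signal(x):
    """Double-reconstruction ratio: reconstruct twice with the same
    autoencoder and return the ratio of the two reconstruction losses."""
    r1 = decode(encode(x))    # first reconstruction
    r2 = decode(encode(r1))   # second reconstruction
    l1 = mse(x, r1)           # first-pass reconstruction loss
    l2 = mse(r1, r2)          # second-pass reconstruction loss
    return l2 / l1            # ratio cancels image-dependent loss scale

x = rng.normal(size=(1, 64))  # stand-in for one flattened image
print(aedr_signal(x))
```

The intuition: an image produced by this autoencoder's own model is already near the autoencoder's range, so the second reconstruction changes little relative to the first, and the ratio is small regardless of how complex the image is in absolute terms.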
Published
2026-03-14
How to Cite
Wang, C., Yang, Z., Wang, Y., Zhang, W., & Chen, K. (2026). AEDR: Training-Free AI-Generated Image Attribution via Autoencoder Double-Reconstruction. Proceedings of the AAAI Conference on Artificial Intelligence, 40(12), 9675–9683. https://doi.org/10.1609/aaai.v40i12.37930
Issue
Section
AAAI Technical Track on Computer Vision IX