PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation

Authors

  • Liyao Jiang, Department of Electrical and Computer Engineering, University of Alberta; Huawei Technologies Canada
  • Negar Hassanpour, Huawei Technologies Canada
  • Mohammad Salameh, Huawei Technologies Canada
  • Mohammadreza Samadi, Huawei Technologies Canada
  • Jiao He, Huawei Kirin Solution, China
  • Fengyu Sun, Huawei Kirin Solution, China
  • Di Niu, Department of Electrical and Computer Engineering, University of Alberta

DOI:

https://doi.org/10.1609/aaai.v39i4.32420

Abstract

Recent research explores the potential of Diffusion Models (DMs) for consistent object editing, which aims to modify an object's position, size, and composition while preserving the consistency of the object and background without changing their texture and attributes. Current inference-time methods often rely on DDIM inversion, which inherently compromises both efficiency and the achievable consistency of edited images. Recent methods also utilize energy guidance, which iteratively updates the predicted noise and can drive the latents away from the original image, resulting in distortions. In this paper, we propose PixelMan, an inversion-free and training-free method for consistent object editing via Pixel Manipulation and generation. PixelMan directly creates a duplicate copy of the source object at the target location in pixel space, then applies an efficient sampling approach that iteratively harmonizes the manipulated object into the target location and inpaints its original location. Image consistency is ensured by anchoring the edited image to the pixel-manipulated image, as well as by introducing several consistency-preserving optimization techniques during inference. Evaluations on benchmark datasets, together with extensive visual comparisons, show that in as few as 16 inference steps, PixelMan outperforms a range of state-of-the-art training-based and training-free methods, which usually require 50 steps, on multiple consistent object editing tasks.
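The pixel-manipulation step described in the abstract (duplicating the source object at the target location in pixel space, then marking the regions to harmonize and inpaint) can be sketched as follows. This is a minimal illustration with NumPy, not the paper's actual implementation; the function name, signature, and mask conventions are assumptions for illustration only.

```python
import numpy as np

def duplicate_object(image, obj_mask, dy, dx):
    """Copy the masked object to a shifted location in pixel space.

    Hypothetical helper illustrating the abstract's pixel-manipulation idea.
    Returns the manipulated image plus two boolean masks: the pasted (target)
    region, which a sampler would harmonize into its surroundings, and the
    original (source) region, which it would inpaint.
    """
    out = image.copy()
    target_mask = np.zeros_like(obj_mask)
    ys, xs = np.nonzero(obj_mask)          # coordinates of object pixels
    ty, tx = ys + dy, xs + dx              # shifted target coordinates
    # Keep only pixels whose target location stays inside the image bounds.
    ok = (ty >= 0) & (ty < image.shape[0]) & (tx >= 0) & (tx < image.shape[1])
    out[ty[ok], tx[ok]] = image[ys[ok], xs[ok]]
    target_mask[ty[ok], tx[ok]] = True
    return out, target_mask, obj_mask.copy()
```

In the method as described, this manipulated image then serves as the anchor during inversion-free sampling, so that regions outside the two masks remain pixel-identical to the source.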

Published

2025-04-11

How to Cite

Jiang, L., Hassanpour, N., Salameh, M., Samadi, M., He, J., Sun, F., & Niu, D. (2025). PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(4), 4012-4020. https://doi.org/10.1609/aaai.v39i4.32420

Section

AAAI Technical Track on Computer Vision III