ContextFlow: Training-Free Video Object Editing via Adaptive Context Enrichment

Authors

  • Yiyang Chen State Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China
  • Xuanhua He The Hong Kong University of Science and Technology
  • Xiujun Ma State Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China
  • Jack Ma The Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v40i4.37306

Abstract

Training-free video object editing aims to achieve precise object-level manipulation, including object insertion, swapping, and deletion, but maintaining fidelity and temporal consistency remains a significant challenge. Existing methods, often designed for U-Net architectures, suffer from two primary limitations: inaccurate inversion due to first-order solvers, and contextual conflicts caused by crude "hard" feature replacement. These issues are exacerbated in Diffusion Transformers (DiTs), where prior layer-selection heuristics are unsuitable, making effective guidance difficult. To address these limitations, we introduce ContextFlow, a novel training-free framework for DiT-based video object editing. Specifically, we first employ a high-order Rectified Flow solver to establish a robust editing foundation. The core of our framework is Adaptive Context Enrichment (specifying what to edit), a mechanism that resolves contextual conflicts: instead of replacing features, it enriches the self-attention context by concatenating Key-Value pairs from parallel reconstruction and editing paths, allowing the model to fuse information dynamically. To determine where to apply this enrichment (specifying where to edit), we propose a systematic, data-driven analysis that identifies task-specific vital layers. Based on a novel Guidance Responsiveness Metric, our method pinpoints the most influential DiT blocks for different tasks (e.g., insertion, swapping), enabling targeted and highly effective guidance. Extensive experiments show that ContextFlow significantly outperforms existing training-free methods and even surpasses several state-of-the-art training-based approaches, delivering temporally coherent, high-fidelity results.
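The abstract names two concrete mechanisms that a short sketch can make tangible. First, the high-order Rectified Flow solver: the abstract does not specify which solver is used, so the following is only a minimal sketch of a generic second-order Heun integrator for the rectified-flow ODE dx/dt = v(x, t). The function name `rf_inversion_heun` and the `velocity` callable wrapping the DiT are hypothetical, not the authors' API.

```python
import torch

def rf_inversion_heun(x0: torch.Tensor, velocity, timesteps) -> torch.Tensor:
    """Integrate the rectified-flow ODE dx/dt = v(x, t) with Heun's
    second-order method, a more accurate alternative to the first-order
    Euler step the abstract identifies as a source of inversion error.

    velocity(x, t) -> dx/dt is assumed to wrap the DiT's velocity prediction.
    """
    x = x0
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        dt = t_next - t_cur
        v1 = velocity(x, t_cur)            # Euler (predictor) slope
        x_euler = x + dt * v1              # provisional first-order step
        v2 = velocity(x_euler, t_next)     # slope at the predicted point
        x = x + dt * 0.5 * (v1 + v2)       # trapezoidal (Heun) update
    return x
```

Second, Adaptive Context Enrichment: rather than overwriting the editing path's self-attention features with the reconstruction path's ("hard" replacement), the Key-Value context is widened so attention can weigh both paths. A minimal PyTorch sketch, assuming tensors of shape (batch, tokens, dim); all names are illustrative:

```python
import torch
import torch.nn.functional as F

def enriched_self_attention(q_edit, k_edit, v_edit, k_recon, v_recon, num_heads):
    """Self-attention over an enriched context: queries come from the
    editing path only, while keys/values concatenate the editing and
    reconstruction paths, so the softmax weights decide how much
    reconstruction context to absorb instead of a hard swap."""
    k = torch.cat([k_edit, k_recon], dim=1)  # (B, 2N, D): enriched keys
    v = torch.cat([v_edit, v_recon], dim=1)  # (B, 2N, D): enriched values

    def split_heads(x):
        b, n, d = x.shape
        return x.view(b, n, num_heads, d // num_heads).transpose(1, 2)

    out = F.scaled_dot_product_attention(
        split_heads(q_edit), split_heads(k), split_heads(v)
    )
    b, h, n, dh = out.shape
    return out.transpose(1, 2).reshape(b, n, h * dh)
```

Because the reconstruction Keys/Values only extend the attention context, tokens outside the edit region can keep attending to their original features, which is what lets the fusion stay adaptive rather than forced.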

Published

2026-03-14

How to Cite

Chen, Y., He, X., Ma, X., & Ma, J. (2026). ContextFlow: Training-Free Video Object Editing via Adaptive Context Enrichment. Proceedings of the AAAI Conference on Artificial Intelligence, 40(4), 3129-3137. https://doi.org/10.1609/aaai.v40i4.37306

Section

AAAI Technical Track on Computer Vision I