AV-Edit: Multimodal Generative Sound Effect Editing via Audio-Visual Semantic Joint Control
DOI: https://doi.org/10.1609/aaai.v40i26.39298
Abstract
Sound effect editing—modifying audio by adding, removing, or replacing elements—remains constrained by existing approaches that rely solely on low-level signal processing or coarse text prompts, often resulting in limited flexibility and suboptimal audio quality. To address this, we propose AV-Edit, a generative sound effect editing framework that enables fine-grained editing of existing audio tracks in videos by jointly leveraging visual, audio, and text semantics. Specifically, the proposed method employs a specially designed contrastive audio-visual masked autoencoder (CAV-MAE-Edit) for multimodal pre-training, learning aligned cross-modal representations. These representations are then used to train an editing-oriented Multimodal Diffusion Transformer (MM-DiT) that, through a correlation-based feature gating training strategy, removes visually irrelevant sounds and generates missing audio elements consistent with the video content. Furthermore, we construct a dedicated video-based sound editing dataset as an evaluation benchmark. Experiments demonstrate that AV-Edit generates high-quality audio with precise modifications grounded in visual content, achieving state-of-the-art sound effect editing performance and remaining strongly competitive in general audio generation.
Published
2026-03-14
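The correlation-based feature gating mentioned in the abstract can be illustrated with a minimal sketch: audio token features are compared against visual token features, and tokens with low audio-visual correlation are suppressed. This is an assumption-labeled toy example (hard cosine-similarity gating with NumPy), not the paper's actual mechanism; the function name, threshold `tau`, and gating rule are all hypothetical.

```python
import numpy as np

def correlation_gate(audio_feats, visual_feats, tau=0.5):
    """Toy correlation-based gate (hypothetical, not the paper's exact method).

    audio_feats:  (Ta, D) audio token embeddings
    visual_feats: (Tv, D) visual token embeddings
    Suppresses audio tokens whose best cosine similarity to any
    visual token falls below the threshold tau.
    """
    a = audio_feats / np.linalg.norm(audio_feats, axis=1, keepdims=True)
    v = visual_feats / np.linalg.norm(visual_feats, axis=1, keepdims=True)
    sim = a @ v.T                       # (Ta, Tv) cosine similarities
    gate = sim.max(axis=1)              # best visual match per audio token
    gate = (gate > tau).astype(float)   # hard gate; a soft gate is also plausible
    return audio_feats * gate[:, None]
```

In the full model the gate would be learned jointly with the MM-DiT rather than thresholded; this sketch only conveys the idea of keeping visually correlated audio content and zeroing the rest.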
How to Cite
Guo, X., Yang, X., Zhang, L., Yang, J., Wang, Z., & Luan, J. (2026). AV-Edit: Multimodal Generative Sound Effect Editing via Audio-Visual Semantic Joint Control. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21504–21512. https://doi.org/10.1609/aaai.v40i26.39298
Section
AAAI Technical Track on Machine Learning III