Prompting Segmentation with Sound Is Generalizable Audio-Visual Source Localizer
DOI:
https://doi.org/10.1609/aaai.v38i6.28378
Keywords:
CV: Multi-modal Vision, CV: Segmentation, ROB: Multimodal Perception & Sensor Fusion, ML: Multimodal Learning
Abstract
If a model has never seen an object and heard its sound at the same time, can it still accurately localize the object's visual position from input audio? In this work, we concentrate on Audio-Visual Localization and Segmentation under the demanding zero-shot and few-shot scenarios. To this end, unlike existing approaches that mostly employ the encoder-fusion-decoder paradigm to decode localization information from a fused audio-visual feature, we introduce the encoder-prompt-decoder paradigm, which better handles data scarcity and shifting data distributions by drawing on the abundant knowledge of pre-trained models. Specifically, we first construct a Semantic-aware Audio Prompt (SAP) that helps the visual foundation model focus on sounding objects while also shrinking the semantic gap between the visual and audio modalities. We then develop a Correlation Adapter (ColA) that keeps training effort minimal while preserving the knowledge of the visual foundation model. Equipped with these components, extensive experiments demonstrate that this new paradigm outperforms fusion-based methods in both unseen-class and cross-dataset settings. We hope that our work can further promote the study of generalization for Audio-Visual Localization and Segmentation in practical application scenarios. Project page: https://github.com/GeWu-Lab/Generalizable-Audio-Visual-Segmentation
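The abstract describes the encoder-prompt-decoder paradigm only at a high level. The sketch below illustrates the general idea in PyTorch: an audio embedding is projected into prompt tokens that condition a frozen visual backbone, and a small trainable adapter corrects the token features before a segmentation head. The module names (SemanticAudioPrompt, CorrelationAdapter), dimensions, and wiring here are illustrative assumptions, not the authors' actual implementation; see the project page for that.

```python
# Hypothetical sketch of the encoder-prompt-decoder idea from the abstract.
# Names, shapes, and wiring are illustrative assumptions, NOT the paper's code.
import torch
import torch.nn as nn

class SemanticAudioPrompt(nn.Module):
    """Project an audio embedding into a few prompt tokens for the visual encoder."""
    def __init__(self, audio_dim=128, vis_dim=768, num_prompts=4):
        super().__init__()
        self.proj = nn.Linear(audio_dim, vis_dim * num_prompts)
        self.num_prompts, self.vis_dim = num_prompts, vis_dim

    def forward(self, audio_emb):                       # (B, audio_dim)
        p = self.proj(audio_emb)                        # (B, vis_dim * num_prompts)
        return p.view(-1, self.num_prompts, self.vis_dim)  # (B, P, vis_dim)

class CorrelationAdapter(nn.Module):
    """Bottleneck adapter; only these weights train, the visual backbone stays frozen."""
    def __init__(self, vis_dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(vis_dim, bottleneck)
        self.up = nn.Linear(bottleneck, vis_dim)
        nn.init.zeros_(self.up.weight)                  # residual branch starts as identity
        nn.init.zeros_(self.up.bias)

    def forward(self, tokens):                          # (B, N, vis_dim)
        return tokens + self.up(torch.relu(self.down(tokens)))

# Toy usage: prepend audio-derived prompts to frozen visual tokens, adapt, decode a mask.
B, N = 2, 196                                           # batch size, number of visual patch tokens
audio_emb = torch.randn(B, 128)                         # stand-in for a pretrained audio encoder output
vis_tokens = torch.randn(B, N, 768)                     # stand-in for frozen visual-encoder tokens
prompts = SemanticAudioPrompt()(audio_emb)              # (B, 4, 768)
tokens = torch.cat([prompts, vis_tokens], dim=1)        # prompt the visual stream with sound
tokens = CorrelationAdapter()(tokens)                   # lightweight trainable correction
mask_logits = nn.Linear(768, 1)(tokens[:, 4:])          # per-patch segmentation logits, (B, N, 1)
print(mask_logits.shape)                                # torch.Size([2, 196, 1])
```

The key property this sketch tries to convey is that only the prompt projection and the adapter carry trainable parameters, which is what makes the paradigm plausible under zero-shot and few-shot data budgets.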
Published
2024-03-24
How to Cite
Wang, Y., Liu, W., Li, G., Ding, J., Hu, D., & Li, X. (2024). Prompting Segmentation with Sound Is Generalizable Audio-Visual Source Localizer. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5669-5677. https://doi.org/10.1609/aaai.v38i6.28378
Issue
Vol. 38 No. 6 (2024)
Section
AAAI Technical Track on Computer Vision V