V2Meow: Meowing to the Visual Beat via Video-to-Music Generation

Authors

  • Kun Su, University of Washington
  • Judith Yue Li, Google Research
  • Qingqing Huang, ByteDance
  • Dima Kuzmin, Google Research
  • Joonseok Lee, Google Research & Seoul National University
  • Chris Donahue, Google DeepMind & Carnegie Mellon University
  • Fei Sha, Google Research
  • Aren Jansen, Google Research
  • Yu Wang, New York University
  • Mauro Verzetti, Google DeepMind
  • Timo Denk, Google DeepMind

DOI:

https://doi.org/10.1609/aaai.v38i5.28299

Keywords:

CV: Applications, CV: Multi-modal Vision, ML: Deep Generative Models & Autoencoders, NLP: (Large) Language Models

Abstract

Video-to-music generation demands both a temporally localized high-quality listening experience and globally aligned video-acoustic signatures. While recent music generation models excel at the former through advanced audio codecs, the exploration of video-acoustic signatures has been confined to specific visual scenarios. In contrast, our research confronts the challenge of learning globally aligned video-acoustic signatures directly from music-video pairs, without explicitly modeling domain-specific rhythmic or semantic relationships. We propose V2Meow, a video-to-music generation system capable of producing high-quality music audio for a diverse range of video input types using a multi-stage autoregressive model. Trained on 5k hours of music audio clips paired with video frames mined from in-the-wild music videos, V2Meow is competitive with previous domain-specific models when evaluated in a zero-shot manner. It synthesizes high-fidelity music audio waveforms solely by conditioning on pre-trained general-purpose visual features extracted from video frames, with optional style control via text prompts. Through both qualitative and quantitative evaluations, we demonstrate that our model outperforms various existing music generation systems in terms of visual-audio correspondence and audio quality. Music samples are available at tinyurl.com/v2meow.

Published

2024-03-24

How to Cite

Su, K., Li, J. Y., Huang, Q., Kuzmin, D., Lee, J., Donahue, C., Sha, F., Jansen, A., Wang, Y., Verzetti, M., & Denk, T. (2024). V2Meow: Meowing to the Visual Beat via Video-to-Music Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4952-4960. https://doi.org/10.1609/aaai.v38i5.28299

Section

AAAI Technical Track on Computer Vision IV