JoVALE: Detecting Human Actions in Video Using Audiovisual and Language Contexts

Authors

  • Taein Son, Hanyang University
  • Soo Won Seo, Seoul National University
  • Jisong Kim, Hanyang University
  • Seok Hwan Lee, Hanyang University
  • Jun Won Choi, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v39i7.32745

Abstract

Video Action Detection (VAD) entails localizing and categorizing action instances within videos, which inherently consist of diverse information sources such as audio, visual cues, and surrounding scene contexts. Leveraging this multi-modal information effectively for VAD poses a significant challenge, as the model must identify action-relevant cues with precision. In this study, we introduce a novel multi-modal VAD architecture, referred to as the Joint Actor-centric Visual, Audio, Language Encoder (JoVALE). JoVALE is the first VAD method to integrate audio and visual features with scene-descriptive context sourced from large-capacity image captioning models. At the heart of JoVALE is the actor-centric aggregation of audio, visual, and scene-descriptive information, enabling adaptive integration of the features most relevant to recognizing each actor's actions. We have developed a Transformer-based architecture, the Actor-centric Multi-modal Fusion Network, specifically designed to capture the dynamic interactions among actors and their multi-modal contexts. Our evaluation on three prominent VAD benchmarks—AVA, UCF101-24, and JHMDB51-21—demonstrates that incorporating multi-modal information significantly enhances performance, setting a new state of the art in the field.
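The actor-centric aggregation described above can be sketched as cross-attention from per-actor queries onto the pooled audio, visual, and language tokens, letting each actor embedding weight whichever modality is most informative for its action. This is a minimal PyTorch illustration under our own assumptions; the module and parameter names are hypothetical and not taken from the paper's implementation.

```python
import torch
import torch.nn as nn

class ActorCentricFusion(nn.Module):
    """Hypothetical sketch of actor-centric multi-modal fusion:
    each actor query cross-attends over the concatenated audio,
    visual, and caption (language) tokens."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, actor_queries, audio_tok, visual_tok, lang_tok):
        # actor_queries: (B, N_actors, D); *_tok: (B, T_modality, D)
        context = torch.cat([audio_tok, visual_tok, lang_tok], dim=1)
        fused, attn_w = self.cross_attn(actor_queries, context, context)
        # Residual connection keeps the original actor representation.
        return self.norm(actor_queries + fused), attn_w

B, D = 2, 256
fusion = ActorCentricFusion(d_model=D)
actors = torch.randn(B, 5, D)                    # 5 actor proposals
out, w = fusion(
    actors,
    torch.randn(B, 10, D),                       # audio tokens
    torch.randn(B, 20, D),                       # visual tokens
    torch.randn(B, 8, D),                        # caption tokens
)
print(out.shape)  # torch.Size([2, 5, 256])
```

The attention weights `w` (one row per actor query over all 38 multi-modal tokens) show how each actor adaptively distributes its focus across modalities, which is the intuition behind the fusion described in the abstract.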

Published

2025-04-11

How to Cite

Son, T., Seo, S. W., Kim, J., Lee, S. H., & Choi, J. W. (2025). JoVALE: Detecting Human Actions in Video Using Audiovisual and Language Contexts. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 6940–6949. https://doi.org/10.1609/aaai.v39i7.32745

Section

AAAI Technical Track on Computer Vision VI