DVANet: Disentangling View and Action Features for Multi-View Action Recognition

Authors

  • Nyle Siddiqui, Center for Research in Computer Vision, University of Central Florida
  • Praveen Tirupattur, Center for Research in Computer Vision, University of Central Florida
  • Mubarak Shah, Center for Research in Computer Vision, University of Central Florida

DOI:

https://doi.org/10.1609/aaai.v38i5.28290

Keywords:

CV: Video Understanding & Activity Analysis, CV: Representation Learning for Vision, CV: Applications

Abstract

In this work, we present a novel approach to multi-view action recognition in which learned action representations are guided to be separated from the view-relevant information in a video. Classifying action instances captured from multiple viewpoints is especially difficult due to differences in background, occlusion, and visibility of the captured action across camera angles. To tackle these problems, we propose a novel configuration of learnable transformer decoder queries, in conjunction with two supervised contrastive losses, to enforce the learning of action features that are robust to shifts in viewpoint. Our disentangled feature learning occurs in two stages: the transformer decoder uses separate queries to learn action and view information independently, and these features are then further disentangled by our two contrastive losses. We show that our model and training method significantly outperform all other uni-modal models on four multi-view action recognition datasets: NTU RGB+D, NTU RGB+D 120, PKU-MMD, and N-UCLA. Compared to previous RGB-based works, we obtain maximum improvements of 1.5%, 4.8%, 2.2%, and 4.8% on these datasets, respectively. Our code is available at: https://github.com/NyleSiddiqui/MultiView_Actions
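
The abstract outlines a two-stage design: separate learnable decoder queries for action and view information, followed by two supervised contrastive losses that further disentangle the resulting features. Below is a minimal, hypothetical PyTorch sketch of that structure; the module names, query counts, mean-pooling, temperature, and label setup are illustrative assumptions, not the authors' released implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledDecoder(nn.Module):
    """Stage 1: a transformer decoder with two separate learnable query sets,
    one intended to capture action-relevant and one view-relevant content."""

    def __init__(self, dim=512, n_action_q=8, n_view_q=8, n_layers=2, n_heads=8):
        super().__init__()
        self.action_queries = nn.Parameter(torch.randn(n_action_q, dim))
        self.view_queries = nn.Parameter(torch.randn(n_view_q, dim))
        layer = nn.TransformerDecoderLayer(dim, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)

    def forward(self, video_tokens):
        # video_tokens: (B, T, dim) spatio-temporal features from any video backbone.
        B = video_tokens.size(0)
        queries = torch.cat([self.action_queries, self.view_queries], dim=0)
        queries = queries.unsqueeze(0).expand(B, -1, -1)
        out = self.decoder(queries, video_tokens)
        n_a = self.action_queries.size(0)
        # Pool each query group into one feature vector per video.
        return out[:, :n_a].mean(dim=1), out[:, n_a:].mean(dim=1)


def sup_con_loss(features, labels, temperature=0.1):
    """Stage 2: a supervised contrastive loss (in the style of Khosla et al., 2020):
    features sharing a label are pulled together, all others pushed apart."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                       # (B, B) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))     # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    # Mean log-probability over each anchor's positives (anchors with none are skipped).
    loss = -(log_prob.masked_fill(~pos, 0.0)).sum(1) / pos.sum(1).clamp(min=1)
    return loss[pos.any(1)].mean()


decoder = DisentangledDecoder()
tokens = torch.randn(16, 32, 512)            # toy batch: 16 videos, 32 tokens each
action_labels = torch.randint(0, 10, (16,))  # toy action labels
view_labels = torch.randint(0, 3, (16,))     # toy camera/view labels
action_feat, view_feat = decoder(tokens)
# The same loss is applied twice with different label sets, so action features
# cluster by action class (across views) and view features cluster by camera.
loss = sup_con_loss(action_feat, action_labels) + sup_con_loss(view_feat, view_labels)
```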

Published

2024-03-24

How to Cite

Siddiqui, N., Tirupattur, P., & Shah, M. (2024). DVANet: Disentangling View and Action Features for Multi-View Action Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4873-4881. https://doi.org/10.1609/aaai.v38i5.28290

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV