Toward Open-Set Human Object Interaction Detection

Authors

  • Mingrui Wu, Xiamen University
  • Yuqi Liu, Xiamen University
  • Jiayi Ji, Xiamen University
  • Xiaoshuai Sun, Xiamen University
  • Rongrong Ji, Xiamen University, China

DOI:

https://doi.org/10.1609/aaai.v38i6.28422

Keywords:

CV: Scene Analysis & Understanding, CV: Multi-modal Vision, ML: Multimodal Learning

Abstract

This work addresses the task of open-set Human Object Interaction (HOI) detection. The challenge lies in identifying completely new, out-of-domain relationships, as opposed to the in-domain ones already targeted by improvements in zero-shot HOI detection. To address this challenge, we introduce a simple Disentangled HOI Detection (DHD) model for detecting novel relationships by integrating an open-set object detector with a Visual Language Model (VLM). We utilize a disentangled image-text contrastive learning metric for training and connect the bottom-up visual features to text embeddings through lightweight unary and pair-wise adapters. Our model benefits from the open-set object detector and the VLM to detect novel action categories and to combine actions with novel object categories. We further present the VG-HOI dataset, a comprehensive benchmark with over 17k HOI relationships for open-set scenarios. Experimental results show that our model can detect unknown action classes and combine actions with unknown object classes. Furthermore, it generalizes to over 17k HOI classes while being trained on just 600 HOI classes.
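To make the abstract's architecture concrete, the sketch below illustrates the general idea of scoring HOI candidates by projecting detector features into a text-embedding space through lightweight unary (object) and pair-wise (action) adapters, then taking cosine similarity with class-name embeddings. This is a minimal illustration under assumed shapes and a simple linear adapter; the actual DHD adapters, feature dimensions, and contrastive-training details are not specified in this page, and `LinearAdapter`, `score_hoi`, and all dimensions here are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length so dot products become cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

class LinearAdapter:
    """Hypothetical lightweight adapter: one linear map from detector
    feature space into the VLM text-embedding space (stand-in for the
    paper's unary / pair-wise adapters)."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)

    def __call__(self, x):
        return x @ self.W

def score_hoi(human_feat, object_feat, action_text_emb, object_text_emb,
              unary_adapter, pair_adapter):
    """Score an HOI candidate against open vocabularies of actions and objects.

    human_feat, object_feat: (N, D) bottom-up features from an open-set detector.
    action_text_emb: (A, E) VLM text embeddings of action names.
    object_text_emb: (O, E) VLM text embeddings of object names.
    """
    # Unary branch: object appearance vs. object-class text embeddings.
    obj_emb = l2_normalize(unary_adapter(object_feat))
    obj_scores = obj_emb @ l2_normalize(object_text_emb).T       # (N, O)

    # Pair-wise branch: concatenated human-object pair vs. action embeddings.
    pair_feat = np.concatenate([human_feat, object_feat], axis=-1)
    act_emb = l2_normalize(pair_adapter(pair_feat))
    act_scores = act_emb @ l2_normalize(action_text_emb).T       # (N, A)
    return act_scores, obj_scores
```

Because class names enter only through text embeddings, unseen actions or objects can be scored at inference simply by embedding their names, which is what allows generalization far beyond the 600 training HOI classes.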

Published

2024-03-24

How to Cite

Wu, M., Liu, Y., Ji, J., Sun, X., & Ji, R. (2024). Toward Open-Set Human Object Interaction Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6066–6073. https://doi.org/10.1609/aaai.v38i6.28422

Section

AAAI Technical Track on Computer Vision V