End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge Distillation
Keywords: CV: Scene Analysis & Understanding, CV: Language and Vision
Abstract
Most existing Human-Object Interaction (HOI) detection methods rely heavily on full annotations with predefined HOI categories, which are limited in diversity and costly to scale further. We aim to advance zero-shot HOI detection to detect both seen and unseen HOIs simultaneously. The fundamental challenges are to discover potential human-object pairs and to identify novel HOI categories. To overcome these challenges, we propose a novel End-to-end zero-shot HOI Detection (EoID) framework via vision-language knowledge distillation. We first design an Interactive Score module combined with a Two-stage Bipartite Matching algorithm to discriminate interactive human-object pairs in an action-agnostic manner. We then transfer the distribution of action probabilities from the pretrained vision-language teacher, together with the seen ground truth, to the HOI model to attain zero-shot HOI classification. Extensive experiments on the HICO-Det dataset demonstrate that our model discovers potential interactive pairs and enables the recognition of unseen HOIs. Our method outperforms the previous SOTA under various zero-shot settings. Moreover, it generalizes to large-scale object detection data, allowing the action set to be scaled up further. The source code is available at: https://github.com/mrwu-mac/EoID.
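The distillation step described above transfers the teacher's distribution over action categories to the student HOI model. A minimal sketch of this idea, using a temperature-softened softmax and a KL-divergence loss, is shown below; the function names and toy values are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution over actions."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_probs, temperature=2.0):
    """KL divergence from the teacher's action distribution to the
    student's softened prediction (standard knowledge distillation)."""
    student_probs = softmax(student_logits, temperature)
    return sum(t * math.log(t / s)
               for t, s in zip(teacher_probs, student_probs) if t > 0)

# Toy example (hypothetical values): the teacher distribution could come
# from a vision-language model's similarity scores over action prompts.
teacher = softmax([3.0, 1.0, 0.2])   # distribution over 3 actions
student = [2.5, 1.2, 0.1]            # student's raw action logits
loss = distillation_loss(student, teacher)
```

Because the KL loss is computed against a full distribution rather than a one-hot label, the student can receive a training signal even for action categories that are never annotated, which is what enables recognition of unseen HOIs.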
How to Cite
Wu, M., Gu, J., Shen, Y., Lin, M., Chen, C., & Sun, X. (2023). End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 2839-2846. https://doi.org/10.1609/aaai.v37i3.25385
AAAI Technical Track on Computer Vision III