TY - JOUR
AU - Yuan, Hangjie
AU - Wang, Mang
AU - Ni, Dong
AU - Xu, Liangpeng
PY - 2022/06/28
Y2 - 2024/03/28
TI - Detecting Human-Object Interactions with Object-Guided Cross-Modal Calibrated Semantics
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 3
SE - AAAI Technical Track on Computer Vision III
DO - 10.1609/aaai.v36i3.20229
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20229
SP - 3206-3214
AB - Human-Object Interaction (HOI) detection is an essential task for understanding human-centric images from a fine-grained perspective. Although end-to-end HOI detection models thrive, their paradigm of parallel human/object detection and verb class prediction loses a key merit of two-stage methods: the object-guided hierarchy. The object in an HOI triplet gives direct clues to the verb to be predicted. In this paper, we aim to boost end-to-end models with object-guided statistical priors. Specifically, we propose a Verb Semantic Model (VSM) and use semantic aggregation to profit from this object-guided hierarchy. A Similarity KL (SKL) loss is proposed to optimize the VSM to align with the HOI dataset's priors. To overcome the static semantic embedding problem, we propose to generate cross-modality-aware visual and semantic features via Cross-Modal Calibration (CMC). These modules combined compose the Object-guided Cross-modal Calibration Network (OCN). Experiments conducted on two popular HOI detection benchmarks demonstrate the significance of incorporating statistical prior knowledge and produce state-of-the-art performance. Further analysis indicates that the proposed modules serve as a stronger verb predictor and a superior means of utilizing prior knowledge. The code is available at https://github.com/JacobYuan7/OCN-HOI-Benchmark.
ER -