Panoptic Scene Graph Generation with Semantics-Prototype Learning
DOI:
https://doi.org/10.1609/aaai.v38i4.28098
Keywords:
CV: Multi-modal Vision
Abstract
Panoptic Scene Graph Generation (PSG) parses objects and predicts their relationships (predicates) to connect human language and visual scenes. However, different language preferences of annotators and semantic overlaps between predicates lead to biased predicate annotations in the dataset, i.e., different predicates for the same object pairs. Biased predicate annotations make PSG models struggle to construct a clear decision plane among predicates, which greatly hinders the real-world application of PSG models. To address this intrinsic bias, we propose a novel framework named ADTrans to adaptively transfer biased predicate annotations to informative and unified ones. To ensure consistency and accuracy during the transfer process, we propose to observe the invariance degree of representations in each predicate class, and learn unbiased prototypes of predicates with different intensities. Meanwhile, we continuously measure the distribution changes between each representation and its prototype, and constantly screen potentially biased data. Finally, with the unbiased predicate-prototype representation embedding space, biased annotations are easily identified. Experiments show that ADTrans significantly improves the performance of benchmark models, achieving new state-of-the-art performance, and shows great generalization and effectiveness on multiple datasets. Our code is released at https://github.com/lili0415/PSG-biased-annotation.
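The screening idea described in the abstract — comparing each predicate representation against a per-class prototype and flagging samples that drift toward another class — can be sketched as follows. This is a minimal illustration only: the mean-embedding prototypes, cosine-similarity criterion, and `margin` threshold are our assumptions, not the paper's actual ADTrans implementation.

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    # Mean embedding per predicate class (a simple stand-in for the
    # paper's learned unbiased prototypes).
    protos = np.stack([embeddings[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    # L2-normalize so dot products below are cosine similarities.
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def screen_biased(embeddings, labels, protos, margin=0.1):
    # Flag samples whose embedding is markedly closer to another
    # class's prototype than to its own (hypothetical criterion).
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ protos.T                       # (N, C) cosine similarities
    own = sims[np.arange(len(labels)), labels]  # similarity to own prototype
    sims_other = sims.copy()
    sims_other[np.arange(len(labels)), labels] = -np.inf
    best_other = sims_other.max(axis=1)         # best competing prototype
    return best_other - own > margin            # True = potentially biased
```

In a full pipeline, flagged annotations would then be transferred to the predicate class whose prototype they best match, rather than simply discarded.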
Published
2024-03-24
How to Cite
Li, L., Ji, W., Wu, Y., Li, M., Qin, Y., Wei, L., & Zimmermann, R. (2024). Panoptic Scene Graph Generation with Semantics-Prototype Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3145-3153. https://doi.org/10.1609/aaai.v38i4.28098
Issue
Section
AAAI Technical Track on Computer Vision III