TD²-Net: Toward Denoising and Debiasing for Video Scene Graph Generation
DOI: https://doi.org/10.1609/aaai.v38i4.28137
Keywords: CV: Video Understanding & Activity Analysis, CV: Scene Analysis & Understanding
Abstract
Dynamic scene graph generation (SGG) focuses on detecting objects in a video and determining their pairwise relationships. Existing dynamic SGG methods usually suffer from two issues: 1) contextual noise, since some frames contain occluded or blurred objects; and 2) label bias, primarily due to the severe imbalance between a few positive relationship samples and numerous negative ones, compounded by the long-tailed distribution of relationship classes. To address these problems, we introduce TD²-Net, a network that aims at denoising and debiasing for dynamic SGG. Specifically, we first propose a denoising spatio-temporal transformer module that enhances object representations with robust contextual information. This is achieved by designing a differentiable Top-K object selector that uses the Gumbel-Softmax sampling strategy to select the relevant neighborhood for each object. Second, we introduce an asymmetrical reweighting loss to mitigate label bias. This loss function integrates asymmetric focusing factors and per-class sample volumes to adjust the weight assigned to each individual sample. Systematic experimental results demonstrate the superiority of TD²-Net over existing state-of-the-art approaches on the Action Genome database. In particular, TD²-Net outperforms the second-best competitor by 12.7% on mean-Recall@10 for predicate classification.
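The abstract only gives the gist of the differentiable Top-K object selector. Below is a minimal PyTorch sketch of one common way to realize such a selector with straight-through Gumbel-Softmax sampling; the function name `gumbel_topk_select`, the tensor shapes, and the temperature default are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_topk_select(scores: torch.Tensor, k: int, tau: float = 1.0) -> torch.Tensor:
    """Select K neighbors differentiably via straight-through Gumbel-Softmax.

    scores: (N,) relevance logits for the N candidate neighbor objects.
    Returns a (k, N) selection matrix whose rows are one-hot in the forward
    pass while soft gradients flow back to `scores`.
    """
    # Perturb the logits with Gumbel(0, 1) noise (reparameterization trick).
    gumbels = -torch.empty_like(scores).exponential_().log()
    logits = (scores + gumbels) / tau

    rows = []
    mask = torch.zeros_like(logits, dtype=torch.bool)
    for _ in range(k):
        # Softmax over the candidates that have not been picked yet.
        probs = F.softmax(logits.masked_fill(mask, float("-inf")), dim=-1)
        idx = probs.argmax(dim=-1)
        hard = F.one_hot(idx, scores.numel()).to(probs.dtype)
        # Straight-through estimator: hard selection forward, soft backward.
        rows.append(hard + probs - probs.detach())
        mask = mask | hard.bool()
    return torch.stack(rows)

# Usage sketch: given neighbor features of shape (N, D),
# `gumbel_topk_select(scores, k=5) @ neighbor_feats` gathers the (k, D)
# features of the selected neighborhood for one object.
```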
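Likewise, the asymmetrical reweighting loss is described only at a high level. The sketch below combines asymmetric focusing factors (in the spirit of asymmetric focal losses) with class-balanced weights derived from per-class sample counts, which matches the abstract's description in spirit; all names, defaults, and the exact weighting formula are assumptions rather than the paper's formulation.

```python
import torch

def asymmetric_reweighting_loss(logits, targets, class_counts,
                                gamma_pos=0.0, gamma_neg=4.0, beta=0.999):
    """Asymmetric, sample-volume-aware binary cross-entropy (a sketch).

    logits: (B, C) predicate logits; targets: (B, C) multi-hot labels.
    class_counts: (C,) training-sample count per predicate class.
    gamma_pos / gamma_neg: asymmetric focusing factors; a larger gamma_neg
        down-weights the abundant easy negatives more aggressively.
    beta: hyper-parameter of the class-balanced "effective number" weights.
    """
    p = torch.sigmoid(logits)

    # Class-balanced weight from the effective number of samples:
    # w_c = (1 - beta) / (1 - beta^{n_c}), normalized to mean 1.
    n = class_counts.clamp(min=1).float()
    w = (1.0 - beta) / (1.0 - beta ** n)
    w = w / w.mean()

    eps = 1e-8
    pos = targets * (1.0 - p).pow(gamma_pos) * torch.log(p.clamp(min=eps))
    neg = (1.0 - targets) * p.pow(gamma_neg) * torch.log((1.0 - p).clamp(min=eps))
    return -(w * (pos + neg)).mean()
```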
Published: 2024-03-24
How to Cite
Lin, X., Shi, C., Zhan, Y., Yang, Z., Wu, Y., & Tao, D. (2024). TD²-Net: Toward Denoising and Debiasing for Video Scene Graph Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3495-3503. https://doi.org/10.1609/aaai.v38i4.28137
Section: AAAI Technical Track on Computer Vision III