Adaptive Image-to-Video Scene Graph Generation via Knowledge Reasoning and Adversarial Learning


  • Jin Chen Beijing Institute of Technology
  • Xiaofeng Ji Beijing Institute of Technology
  • Xinxiao Wu Beijing Institute of Technology



Computer Vision (CV)


A scene graph in a video conveys a wealth of information about objects and their relationships in the scene, thus benefiting many downstream tasks such as video captioning and visual question answering. Existing methods of scene graph generation require large-scale training videos annotated with objects and relationships in each frame to learn a powerful model. However, such comprehensive annotation is time-consuming and labor-intensive. On the other hand, it is much easier and less costly to annotate images with scene graphs, so we investigate leveraging annotated images to facilitate training a scene graph generation model for unannotated videos, namely image-to-video scene graph generation. This task presents two challenges: 1) inferring unseen dynamic relationships in videos from static relationships in images, owing to the absence of motion information in images; and 2) adapting objects and static relationships from images to video frames, owing to the domain shift between them. To address the first challenge, we exploit external commonsense knowledge to infer unseen dynamic relationships from the temporal evolution of static relationships. We tackle the second challenge with hierarchical adversarial learning, which reduces the data distribution discrepancy between images and video frames. Extensive experimental results on two benchmark video datasets demonstrate the effectiveness of our method.
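The adversarial alignment mentioned above is commonly implemented with a gradient reversal layer (GRL): features pass through unchanged in the forward direction, while the domain classifier's gradient is negated on the way back, driving the feature extractor toward domain-confused (image-vs-frame-indistinguishable) representations. The sketch below is a generic, framework-free illustration of this mechanism in NumPy; the class name and scaling factor are illustrative assumptions, not the authors' hierarchical implementation.

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer: identity in the forward pass,
    negated (and scaled by lam) gradient in the backward pass."""

    def __init__(self, lam=1.0):
        # lam trades off the adversarial (domain-confusion) signal
        # against the main task loss; a hypothetical value here.
        self.lam = lam

    def forward(self, x):
        # Features flow to the domain classifier unchanged.
        return x

    def backward(self, grad_output):
        # The domain classifier's gradient is flipped before it
        # reaches the feature extractor, so the extractor learns to
        # *confuse* the image/video-frame domain classifier.
        return -self.lam * grad_output

# Toy demonstration with a 3-d feature vector.
grl = GradientReversal(lam=0.5)
feat = np.array([1.0, -2.0, 3.0])
out = grl.forward(feat)               # identical to feat
grad_from_classifier = np.array([0.2, 0.4, -0.6])
grad_to_extractor = grl.backward(grad_from_classifier)
```

In a full pipeline this layer would sit between the shared feature extractor and each domain discriminator (e.g. at object-level and relationship-level for a hierarchical scheme), with the discriminators trained to tell image features from video-frame features.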




How to Cite

Chen, J., Ji, X., & Wu, X. (2022). Adaptive Image-to-Video Scene Graph Generation via Knowledge Reasoning and Adversarial Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 276-284.



AAAI Technical Track on Computer Vision I