Linking People across Text and Images Based on Social Relation Reasoning
DOI
https://doi.org/10.1609/aaai.v37i1.25209
Keywords
CV: Language and Vision
Abstract
As a sub-task of visual grounding, linking people across text and images aims to localize target people in images given corresponding sentences. Existing approaches tend to capture superficial features of people (e.g., dress and location), which suffer from incomplete information across text and images. We observe that humans are adept at exploiting social relations to help identify people. Therefore, we propose a Social Relation Reasoning (SRR) model to address these issues. First, we design a Social Relation Extraction (SRE) module to extract social relations between people in the input sentence. Specifically, the SRE module is based on zero-shot learning and can extract social relations even when they are not defined in existing datasets. A Reasoning-based Cross-modal Matching (RCM) module then generates matching matrices by reasoning over the social relations and visual features. Experimental results show that our proposed SRR model outperforms state-of-the-art models in accuracy on the challenging Who's Waldo and FL: MSRE datasets by more than 5% and 7%, respectively. Our source code is available at https://github.com/VILAN-Lab/SRR.
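The abstract outlines a two-stage pipeline: a zero-shot Social Relation Extraction (SRE) step over the sentence, followed by a Reasoning-based Cross-modal Matching (RCM) step that scores textual mentions against people detected in the image. The two Python sketches below are hypothetical illustrations of these ideas, not the authors' implementation; the NLI model, the relation labels, all module names, and the feature dimensions are assumptions made for demonstration.

A minimal zero-shot relation-extraction sketch, assuming an off-the-shelf NLI model stands in for the paper's SRE module:

# Hypothetical sketch: zero-shot social-relation extraction with an NLI model.
# The candidate relation labels and the example sentence are made up for illustration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
sentence = "Anna hands the trophy to her coach Mark after the final."
candidate_relations = ["coach", "friend", "colleague", "family member"]
result = classifier(sentence, candidate_labels=candidate_relations)
print(result["labels"][0], result["scores"][0])  # top-scoring relation

And a minimal sketch of producing a matching matrix between textual person mentions (enriched with relation features) and visual person regions, with one simple relational reasoning step; the dimensions and the attention-style propagation are illustrative choices, not the paper's RCM design:

# Hypothetical sketch: score every textual mention against every detected person.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalMatcher(nn.Module):
    def __init__(self, text_dim=768, vis_dim=2048, hidden=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)  # mention + relation features
        self.vis_proj = nn.Linear(vis_dim, hidden)    # detected-person features
        self.reason = nn.Linear(hidden, hidden)       # one message-passing step

    def forward(self, text_feats, vis_feats):
        # text_feats: (num_mentions, text_dim); vis_feats: (num_people, vis_dim)
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        v = self.vis_proj(vis_feats)
        attn = torch.softmax(v @ v.t(), dim=-1)       # people attend to each other
        v = F.normalize(v + self.reason(attn @ v), dim=-1)
        return t @ v.t()                              # (num_mentions, num_people) matching matrix

matcher = CrossModalMatcher()
scores = matcher(torch.randn(2, 768), torch.randn(4, 2048))
print(scores.argmax(dim=-1))  # predicted image person for each textual mention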
Published
2023-06-26
How to Cite
Lei, Y., Zhao, P., Li, P., Cai, Y., & Huang, Q. (2023). Linking People across Text and Images Based on Social Relation Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1260-1268. https://doi.org/10.1609/aaai.v37i1.25209
Issue
Vol. 37 No. 1 (2023)
Section
AAAI Technical Track on Computer Vision I