TY - JOUR
AU - Liu, Daizong
AU - Qu, Xiaoye
AU - Zhou, Pan
AU - Liu, Yang
PY - 2022/06/28
Y2 - 2024/03/29
TI - Exploring Motion and Appearance Information for Temporal Sentence Grounding
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 2
SE - AAAI Technical Track on Computer Vision II
DO - 10.1609/aaai.v36i2.20059
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20059
SP - 1674
EP - 1682
AB - This paper addresses temporal sentence grounding. Previous works typically solve this task by learning frame-level video features and aligning them with the textual information. A major limitation of these works is that, due to frame-level feature extraction, they fail to distinguish ambiguous video frames with subtle appearance differences. Recently, a few methods have adopted Faster R-CNN to extract detailed object features in each frame to differentiate fine-grained appearance similarities. However, the object-level features extracted by Faster R-CNN lack motion analysis, since the object detection model performs no temporal modeling. To solve this issue, we propose a novel Motion-Appearance Reasoning Network (MARN), which incorporates both motion-aware and appearance-aware object features to better reason about object relations when modeling activity across successive frames. Specifically, we first introduce two individual video encoders to embed the video into corresponding motion-oriented and appearance-oriented object representations. Then, we develop separate motion and appearance branches to learn motion-guided and appearance-guided object relations, respectively. Finally, the motion and appearance information from the two branches is combined to generate more representative features for final grounding. Extensive experiments on two challenging datasets (Charades-STA and TACoS) show that our proposed MARN outperforms previous state-of-the-art methods by a large margin.
ER -