Learning Object Context for Dense Captioning


  • Xiangyang Li Chinese Academy of Sciences
  • Shuqiang Jiang Chinese Academy of Sciences
  • Jungong Han Lancaster University




Dense captioning is a challenging task that not only detects visual elements in images but also generates natural language sentences to describe them. Previous approaches do not leverage object information in images for this task. However, objects provide valuable cues for predicting the locations of caption regions, since caption regions often highly overlap with objects (i.e., caption regions are usually parts of objects or combinations of them). Objects also provide important information for describing a target caption region, as the corresponding description not only depicts the region's properties but also involves its interactions with objects in the image. In this work, we propose a novel scheme with an object context encoding Long Short-Term Memory (LSTM) network that automatically learns complementary object context for each caption region, transferring knowledge from objects to caption regions. All contextual objects are arranged as a sequence and progressively fed into the context encoding module to obtain context features. Both the learned object context features and the region features are then used to predict bounding box offsets and generate descriptions. The context learning procedure is jointly optimized with both location prediction and caption generation, enabling the object context encoding LSTM to capture and aggregate useful object context. Experiments on benchmark datasets demonstrate the superiority of our proposed approach over state-of-the-art methods.
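The pipeline the abstract describes, feeding contextual object features through a recurrent encoder one at a time and fusing the result with the caption region's features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy elementwise recurrent update stands in for the actual context-encoding LSTM, all feature values and dimensions are hypothetical, and `encode_object_context` and `fuse` are names invented for this sketch.

```python
import math

def encode_object_context(object_feats, alpha=0.5):
    """Progressively fold a sequence of per-object feature vectors into a
    single context vector. A toy stand-in for the paper's context-encoding
    LSTM: here h_t = tanh(alpha * h_{t-1} + (1 - alpha) * x_t), elementwise."""
    dim = len(object_feats[0])
    h = [0.0] * dim  # initial hidden state
    for x in object_feats:
        h = [math.tanh(alpha * hj + (1 - alpha) * xj) for hj, xj in zip(h, x)]
    return h

def fuse(region_feat, context_feat):
    # Concatenate region and learned object-context features, which the
    # model then uses to predict box offsets and generate the caption.
    return region_feat + context_feat

# Hypothetical 4-d features: three contextual objects and one caption region.
objects = [[0.2, 0.1, 0.0, 0.4],
           [0.5, 0.3, 0.2, 0.1],
           [0.0, 0.4, 0.6, 0.3]]
region = [0.9, 0.8, 0.1, 0.2]

context = encode_object_context(objects)
fused = fuse(region, context)
print(len(fused))  # 8: region features (4) + context features (4)
```

In the actual model, the recurrent update is a learned LSTM cell trained end-to-end, so the gradients from both the localization loss and the captioning loss shape which object context gets aggregated.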




How to Cite

Li, X., Jiang, S., & Han, J. (2019). Learning Object Context for Dense Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8650-8657. https://doi.org/10.1609/aaai.v33i01.33018650



AAAI Technical Track: Vision