Visual Concept Reasoning Networks

Authors

  • Taesup Kim (Mila, Université de Montréal; Kakao Brain)
  • Sungwoong Kim (Kakao Brain)
  • Yoshua Bengio (Mila, Université de Montréal)

Keywords

(Deep) Neural Network Algorithms, General

Abstract

A split-transform-merge strategy has been broadly used as an architectural constraint in convolutional neural networks for visual recognition tasks. It approximates sparsely connected networks by explicitly defining multiple branches that simultaneously learn representations with different visual concepts or properties. The dependencies or interactions between these representations, however, are typically defined by dense and local operations, without any adaptiveness or high-level reasoning. In this work, we propose Visual Concept Reasoning Networks (VCRNet), which exploit this strategy to enable reasoning between high-level visual concepts. We associate each branch with a visual concept and derive a compact concept state by selecting a few local descriptors through an attention module. These concept states are then updated by graph-based interaction and used to adaptively modulate the local descriptors. We describe the proposed model as a sequence of split-transform-attend-interact-modulate-merge stages, implemented as a highly modularized architecture. Extensive experiments on visual recognition tasks such as image classification, semantic segmentation, object detection, scene recognition, and action recognition show that VCRNet consistently improves performance while increasing the number of parameters by less than 1%.
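The attend-interact-modulate stages described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the shapes, the weight matrices (`W_attn`, `W_inter`, `W_mod`), the single-step fully connected message passing between concept states, and the sigmoid channel gating are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def vcr_block(branches, W_attn, W_inter, W_mod):
    """One attend-interact-modulate-merge pass over K branches.

    branches: list of K arrays, each (N, d) -- N local descriptors per
    branch (the split-transform stages are assumed to run upstream).
    """
    K = len(branches)

    # Attend: pool each branch's local descriptors into a compact
    # concept state via attention over spatial locations.
    states = []
    for k, X in enumerate(branches):
        scores = softmax(X @ W_attn[k], axis=0)   # (N, 1) attention weights
        states.append((scores * X).sum(axis=0))   # (d,) concept state
    H = np.stack(states)                          # (K, d)

    # Interact: one graph-based message-passing step between concept
    # states (fully connected graph assumed for illustration).
    H = H + (np.ones((K, K)) / K) @ H @ W_inter   # (K, d)

    # Modulate: gate each branch's local descriptors with its
    # updated concept state (channel-wise sigmoid gate assumed).
    out = []
    for k, X in enumerate(branches):
        gate = sigmoid(H[k] @ W_mod[k])           # (d,) gate per channel
        out.append(X * gate)                      # broadcast over N locations

    # Merge: sum the modulated branches.
    return sum(out)                               # (N, d)

K, N, d = 4, 16, 8
branches = [rng.standard_normal((N, d)) for _ in range(K)]
W_attn = [rng.standard_normal((d, 1)) for _ in range(K)]
W_inter = rng.standard_normal((d, d)) * 0.1
W_mod = [rng.standard_normal((d, d)) for _ in range(K)]
y = vcr_block(branches, W_attn, W_inter, W_mod)
print(y.shape)  # (16, 8)
```

Note that the extra parameters live only in the small per-branch matrices and the state-interaction matrix, which is consistent with the paper's claim of a sub-1% parameter increase over the backbone.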

Published

2021-05-18

How to Cite

Kim, T., Kim, S., & Bengio, Y. (2021). Visual Concept Reasoning Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8172-8180. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16995

Section

AAAI Technical Track on Machine Learning II