DIANet: Dense-and-Implicit Attention Network

Authors

  • Zhongzhan Huang, Tsinghua University
  • Senwei Liang, Purdue University
  • Mingfu Liang, Northwestern University
  • Haizhao Yang, Purdue University

DOI:

https://doi.org/10.1609/aaai.v34i04.5842

Abstract

Attention networks have successfully boosted performance in various vision problems. Previous works emphasize designing a new attention module and plugging it into individual layers of a network. This paper proposes a novel and simple framework that shares a single attention module across different network layers to encourage the integration of layer-wise information; we refer to this parameter-sharing module as the Dense-and-Implicit-Attention (DIA) unit. Many choices of module can be used in the DIA unit. Since Long Short-Term Memory (LSTM) can capture long-distance dependencies, we focus on the case where the DIA unit is a modified LSTM (called DIA-LSTM). Experiments on benchmark datasets show that the DIA-LSTM unit emphasizes layer-wise feature interrelation and leads to significant improvements in image classification accuracy. We further show empirically that DIA-LSTM has a strong regularization effect that stabilizes the training of deep networks, demonstrated by experiments that remove skip connections (He et al. 2016a) or Batch Normalization (Ioffe and Szegedy 2015) from the whole residual network.
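To make the parameter-sharing idea concrete, here is a minimal PyTorch sketch: a single recurrent attention unit is reused after every block of a stage, carrying a hidden state across layers and emitting a channel-wise attention vector at each block. It uses a standard `nn.LSTMCell` as a stand-in for the paper's modified LSTM, and all names (`DIAUnit`, `channels`, the toy residual stage) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class DIAUnit(nn.Module):
    """One attention unit shared by all blocks of a stage (a sketch,
    using a standard LSTMCell in place of the paper's modified LSTM)."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cell = nn.LSTMCell(channels, channels)

    def forward(self, x, state):
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)            # global average pooling
        h, cstate = self.cell(s, state)        # shared recurrent update
        attn = torch.sigmoid(h).view(b, c, 1, 1)
        return x * attn, (h, cstate)           # channel-wise reweighting

# Hypothetical usage inside a residual stage: the *same* unit is
# applied after every block, so attention parameters are shared and
# the LSTM state implicitly links the layers.
blocks = nn.ModuleList(nn.Conv2d(64, 64, 3, padding=1) for _ in range(3))
dia = DIAUnit(64)
x = torch.randn(2, 64, 32, 32)
state = (torch.zeros(2, 64), torch.zeros(2, 64))
for block in blocks:
    x = torch.relu(block(x)) + x               # stand-in residual block
    x, state = dia(x, state)
```

The recurrent state is what makes the attention "dense and implicit": each layer's reweighting depends on the pooled features of all preceding layers, without adding a separate attention module per layer.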

Published

2020-04-03

How to Cite

Huang, Z., Liang, S., Liang, M., & Yang, H. (2020). DIANet: Dense-and-Implicit Attention Network. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4206-4214. https://doi.org/10.1609/aaai.v34i04.5842

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning