MCA: Moment Channel Attention Networks

Authors

  • Yangbo Jiang — Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, Zhejiang, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
  • Zhiwei Jiang — Guangzhou Electronic Technology Co., Ltd., Chinese Academy of Sciences, Guangzhou, China
  • Le Han — Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, Zhejiang, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
  • Zenan Huang — Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, Zhejiang, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
  • Nenggan Zheng — Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, Zhejiang, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China; State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, Zhejiang, China; CCAI by MOE and Zhejiang Provincial Government (ZJU), Hangzhou, Zhejiang, China

DOI:

https://doi.org/10.1609/aaai.v38i3.28035

Keywords:

CV: Object Detection & Categorization

Abstract

Channel attention mechanisms endeavor to recalibrate channel weights to enhance the representation abilities of networks. However, mainstream methods often rely solely on global average pooling as the feature squeezer, which significantly limits the overall potential of models. In this paper, we investigate the statistical moments of feature maps within a neural network. Our findings highlight the critical role of high-order moments in enhancing model capacity. Consequently, we introduce a flexible and comprehensive mechanism termed Extensive Moment Aggregation (EMA) to capture the global spatial context. Building upon this mechanism, we propose the Moment Channel Attention (MCA) framework, which efficiently incorporates multiple levels of moment-based information while minimizing additional computation costs through our Cross Moment Convolution (CMC) module. The CMC module employs a channel-wise convolution layer to capture multi-order moment information as well as cross-channel features. The MCA block is designed to be lightweight and easily integrated into a variety of neural network architectures. Experimental results on classical image classification, object detection, and instance segmentation tasks demonstrate that our proposed method achieves state-of-the-art results, outperforming existing channel attention methods.
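To make the core idea of the abstract concrete, the following is a minimal NumPy sketch of the moment-based squeeze step it describes: instead of summarizing each channel with global average pooling alone (the first moment), each channel is described by a small vector of statistical moments. The function name `moment_squeeze` and the choice of moment orders are illustrative assumptions, not the authors' implementation, which additionally processes these descriptors with the CMC module.

```python
import numpy as np

def moment_squeeze(x, orders=(1, 2, 3)):
    """Illustrative squeeze step: per-channel statistical moments.

    x: feature map of shape (C, H, W).
    Returns an array of shape (C, len(orders)) where column k holds
    the order-k moment of each channel: the mean for order 1, and the
    central moment E[(x - mean)^k] for orders >= 2.
    """
    c = x.shape[0]
    flat = x.reshape(c, -1)                  # flatten spatial dims: (C, H*W)
    mean = flat.mean(axis=1, keepdims=True)  # first moment, as in average pooling
    cols = []
    for k in orders:
        if k == 1:
            cols.append(mean[:, 0])
        else:
            cols.append(((flat - mean) ** k).mean(axis=1))  # k-th central moment
    return np.stack(cols, axis=1)            # (C, num_orders) channel descriptor

# Toy usage: 4 channels of an 8x8 feature map.
feat = np.random.default_rng(0).standard_normal((4, 8, 8))
desc = moment_squeeze(feat)
print(desc.shape)  # (4, 3): mean, variance, third central moment per channel
```

The order-1 column reproduces what global average pooling would produce, while the higher-order columns add the extra distributional information (e.g. variance, skew) that the paper argues improves model capacity.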

Published

2024-03-24

How to Cite

Jiang, Y., Jiang, Z., Han, L., Huang, Z., & Zheng, N. (2024). MCA: Moment Channel Attention Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2579–2588. https://doi.org/10.1609/aaai.v38i3.28035

Section

AAAI Technical Track on Computer Vision II