Multiscale Low-Frequency Memory Network for Improved Feature Extraction in Convolutional Neural Networks
DOI:
https://doi.org/10.1609/aaai.v38i6.28411
Keywords:
CV: Applications, CV: Representation Learning for Vision, ML: Deep Learning Theory, ML: Feature Construction/Reformulation
Abstract
Deep learning and Convolutional Neural Networks (CNNs) have driven major transformations in diverse research areas. However, their limitations in handling low-frequency information present obstacles in certain tasks, such as interpreting global structures or managing smooth-transition images. Despite the promising performance of transformer structures in numerous tasks, their intricate optimization complexities highlight the persistent need for refined CNN enhancements using limited resources. Responding to these complexities, we introduce a novel framework, the Multiscale Low-Frequency Memory (MLFM) Network, aiming to harness the full potential of CNNs while keeping their complexity unchanged. The MLFM efficiently preserves low-frequency information, enhancing performance in targeted computer vision tasks. Central to our MLFM is the Low-Frequency Memory Unit (LFMU), which stores various low-frequency data and forms a parallel channel to the core network. A key advantage of MLFM is its seamless compatibility with various prevalent networks, requiring no alterations to their original core structure. Testing on ImageNet demonstrated substantial accuracy improvements in multiple 2D CNNs, including ResNet, MobileNet, EfficientNet, and ConvNeXt. Furthermore, we showcase MLFM's versatility beyond traditional image classification by successfully integrating it into image-to-image translation tasks, specifically semantic segmentation networks such as FCN and U-Net. In conclusion, our work signifies a pivotal stride toward optimizing the efficacy and efficiency of CNNs with limited resources. This research builds upon existing CNN foundations and paves the way for future advancements in computer vision. Our code is available at https://github.com/AlphaWuSeu/MLFM.
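The idea of a low-frequency memory running in parallel to a core network can be illustrated with a minimal NumPy sketch. This is a conceptual toy under stated assumptions, not the paper's implementation: the box-filter low-pass, the blending factor `alpha`, and the names `low_pass` and `mlfm_block` are hypothetical choices made purely for illustration.

```python
import numpy as np

def low_pass(x, k=4):
    """Crude low-frequency extractor (assumption, not the paper's filter):
    average-pool a 2D feature map by factor k, then upsample it back
    with nearest-neighbor repetition."""
    h, w = x.shape
    pooled = x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)

def mlfm_block(feature, memory, alpha=0.5):
    """Hypothetical memory-unit step: blend the stored low-frequency memory
    with the low-frequency content of the current feature map, then add the
    memory back onto the core path before returning both."""
    lf = low_pass(feature)
    memory = alpha * memory + (1 - alpha) * lf  # running low-frequency store
    fused = feature + memory                    # parallel channel rejoins core path
    return fused, memory

# One pass over a toy 8x8 feature map with an empty memory.
feat = np.ones((8, 8))
mem = np.zeros((8, 8))
fused, mem = mlfm_block(feat, mem)
```

Because the memory persists across blocks, smooth global structure that repeated convolutions would attenuate stays available to later stages; the actual MLFM operates multiscale and per-channel, which this 2D sketch omits.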
Published
2024-03-24
How to Cite
Wu, F., Wu, J., Kong, Y., Yang, C., Yang, G., Shu, H., Carrault, G., & Senhadji, L. (2024). Multiscale Low-Frequency Memory Network for Improved Feature Extraction in Convolutional Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5967-5975. https://doi.org/10.1609/aaai.v38i6.28411
Section
AAAI Technical Track on Computer Vision V