The Master Key Filters Hypothesis: Deep Filters Are General

Authors

  • Zahra Babaiee, Technische Universität Wien
  • Peyman M. Kiasari, Technische Universität Wien
  • Daniela Rus, Massachusetts Institute of Technology
  • Radu Grosu, Technische Universität Wien

DOI:

https://doi.org/10.1609/aaai.v39i2.32175

Abstract

This paper challenges the prevailing view that convolutional neural network (CNN) filters become increasingly specialized in deeper layers. Motivated by recent observations of clusterable repeating patterns in depthwise separable CNNs (DS-CNNs) trained on ImageNet, we extend this investigation across various domains and datasets. Our analysis of DS-CNNs reveals that deep filters maintain generality, contradicting the expected transition to class-specific features. We demonstrate the generalizability of these filters through transfer learning experiments, showing that frozen filters from models trained on different datasets perform well and can be further improved when sourced from larger, better-performing models. Our findings indicate that spatial features learned by depthwise separable convolutions remain generic across all layers, domains, and architectures. This research provides new insights into the nature of generalization in neural networks, particularly in DS-CNNs, and has significant implications for transfer learning and model design.
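The abstract's core claim is that the spatial (depthwise) filters of a DS-CNN are generic, so filters taken from one trained model can be frozen and reused in another, with only the channel-mixing (pointwise) weights adapted. A minimal numpy sketch of this factorization, written for illustration rather than taken from the paper's code, separates the two stages so the depthwise kernels can be swapped in from a different "source" model:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution, illustrative only (no padding/stride).

    x:          input feature map, shape (C_in, H, W)
    dw_kernels: one spatial filter per channel, shape (C_in, k, k)
    pw_weights: 1x1 channel-mixing weights, shape (C_out, C_in)
    """
    c_in, h, w = x.shape
    k = dw_kernels.shape[1]
    oh, ow = h - k + 1, w - k + 1

    # Depthwise stage: each channel is convolved with its own spatial filter.
    # These are the filters the paper argues remain generic at every depth.
    dw_out = np.zeros((c_in, oh, ow))
    for c in range(c_in):
        for i in range(oh):
            for j in range(ow):
                dw_out[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_kernels[c])

    # Pointwise stage: a 1x1 convolution mixes channels at each location.
    # In the transfer setting sketched here, only these weights would be trained.
    return np.einsum('oc,chw->ohw', pw_weights, dw_out)

# Hypothetical transfer scenario: reuse frozen depthwise kernels from a
# "source" model and pair them with freshly initialized pointwise weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))          # 3-channel input
source_dw = rng.standard_normal((3, 3, 3))  # frozen spatial filters (transferred)
new_pw = rng.standard_normal((5, 3))        # trainable channel-mixing weights
out = depthwise_separable_conv(x, source_dw, new_pw)
print(out.shape)  # (5, 6, 6)
```

Because the depthwise and pointwise stages are independent tensors, freezing the transferred spatial filters amounts to excluding `dw_kernels` from the optimizer, which is the setup the abstract's transfer-learning experiments describe.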

Published

2025-04-11

How to Cite

Babaiee, Z., Kiasari, P. M., Rus, D., & Grosu, R. (2025). The Master Key Filters Hypothesis: Deep Filters Are General. Proceedings of the AAAI Conference on Artificial Intelligence, 39(2), 1809–1816. https://doi.org/10.1609/aaai.v39i2.32175

Section

AAAI Technical Track on Computer Vision I