Auto-Prox: Training-Free Vision Transformer Architecture Search via Automatic Proxy Discovery

Authors

  • Zimian Wei, National University of Defense Technology
  • Peijie Dong, The Hong Kong University of Science and Technology (Guangzhou)
  • Zheng Hui, Columbia University
  • Anggeng Li, Huawei Technologies Ltd.
  • Lujun Li, The Hong Kong University of Science and Technology
  • Menglong Lu, National University of Defense Technology
  • Hengyue Pan, National University of Defense Technology
  • Dongsheng Li, National University of Defense Technology

DOI:

https://doi.org/10.1609/aaai.v38i14.29511

Keywords:

ML: Auto ML and Hyperparameter Tuning, CV: Learning & Optimization for CV, CV: Applications, ML: Deep Neural Architectures and Foundation Models, CV: Representation Learning for Vision, CV: Other Foundations of Computer Vision

Abstract

The substantial success of Vision Transformer (ViT) in computer vision tasks is largely attributed to the architecture design. This underscores the necessity of efficient architecture search for designing better ViTs automatically. As training-based architecture search methods are computationally intensive, there is growing interest in training-free methods that use zero-cost proxies to score ViTs. However, existing training-free approaches require expert knowledge to manually design specific zero-cost proxies. Moreover, these zero-cost proxies show limited ability to generalize across diverse domains. In this paper, we introduce Auto-Prox, an automatic proxy discovery framework, to address this problem. First, we build ViT-Bench-101, which contains diverse ViT candidates and their actual performance on multiple datasets. Using ViT-Bench-101, we can evaluate zero-cost proxies based on their score-accuracy correlation. Then, we represent zero-cost proxies as computation graphs and organize the zero-cost proxy search space with ViT statistics and primitive operations. To discover generic zero-cost proxies, we propose a joint correlation metric to evolve and mutate different zero-cost proxy candidates. To improve search efficiency, we introduce an elitism-preserving strategy that strikes a better trade-off between exploitation and exploration. Based on the discovered zero-cost proxy, we conduct ViT architecture search in a training-free manner. Extensive experiments demonstrate that our method generalizes well to different datasets and achieves state-of-the-art results in both ranking correlation and final accuracy. Code is available at https://github.com/lilujunai/Auto-Prox-AAAI24.
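The core evaluation idea in the abstract — scoring a zero-cost proxy by how well its ranking of architectures correlates with their measured accuracies — can be sketched as follows. This is an illustrative toy example, not the paper's implementation: the architectures, accuracies, and candidate proxies below are hypothetical stand-ins for ViT-Bench-101 entries and the searched computation-graph proxies.

```python
import itertools

def kendall_tau(scores, accs):
    # Kendall rank correlation between proxy scores and measured accuracies:
    # +1 means the proxy ranks architectures exactly as their accuracies do.
    concordant = discordant = 0
    for (s1, a1), (s2, a2) in itertools.combinations(zip(scores, accs), 2):
        prod = (s1 - s2) * (a1 - a2)
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    n = len(scores)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical benchmark entries standing in for ViT-Bench-101:
# each architecture paired with a made-up test accuracy.
archs = [{"depth": d, "width": w} for d, w in [(4, 192), (8, 256), (12, 384), (16, 512)]]
accs = [70.1, 74.3, 78.9, 80.2]  # illustrative values only

# Two toy candidate proxies (the paper instead searches over computation
# graphs built from ViT statistics and primitive operations).
proxies = {
    "param_like": lambda a: a["depth"] * a["width"] ** 2,
    "inv_depth": lambda a: -a["depth"],
}

# A proxy is kept or discarded based on its score-accuracy correlation.
for name, proxy in proxies.items():
    tau = kendall_tau([proxy(a) for a in archs], accs)
    print(f"{name}: tau = {tau:+.2f}")
```

In the paper this correlation is computed jointly across multiple datasets (the joint correlation metric), and the evolutionary search mutates proxy computation graphs while preserving the top-scoring elites.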

Published

2024-03-24

How to Cite

Wei, Z., Dong, P., Hui, Z., Li, A., Li, L., Lu, M., Pan, H., & Li, D. (2024). Auto-Prox: Training-Free Vision Transformer Architecture Search via Automatic Proxy Discovery. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15814-15822. https://doi.org/10.1609/aaai.v38i14.29511

Section

AAAI Technical Track on Machine Learning V