NASTransfer: Analyzing Architecture Transferability in Large Scale Neural Architecture Search

Authors

  • Rameswar Panda (IBM Research; MIT-IBM Watson AI Lab)
  • Michele Merler (IBM Research)
  • Mayoore S Jaiswal (IBM Research)
  • Hui Wu (IBM Research; MIT-IBM Watson AI Lab)
  • Kandan Ramakrishnan (IBM Research)
  • Ulrich Finkler (IBM Research)
  • Chun-Fu Richard Chen (IBM Research; MIT-IBM Watson AI Lab)
  • Minsik Cho (IBM Research)
  • Rogerio Feris (IBM Research; MIT-IBM Watson AI Lab)
  • David Kung (IBM Research)
  • Bishwaranjan Bhattacharjee (IBM Research)

Keywords

Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

Neural Architecture Search (NAS) is an open and challenging problem in machine learning. While NAS offers great promise, the prohibitive computational demand of most existing NAS methods makes it difficult to search for architectures directly on large-scale tasks. The typical way of conducting large-scale NAS is to search for an architectural building block on a small dataset (either a proxy set drawn from the large dataset or a completely different small-scale dataset) and then transfer the block to a larger dataset. Despite a number of recent results showing the promise of transfer from proxy datasets, a comprehensive evaluation of different NAS methods studying the impact of different source datasets has not yet been conducted. In this work, we propose to analyze the architecture transferability of different NAS methods by performing a series of experiments on large-scale benchmarks such as ImageNet1K and ImageNet22K. We find that: (i) The size and domain of the proxy set do not seem to influence architecture performance on the target dataset. On average, architectures searched using completely different small datasets (e.g., CIFAR10) transfer about as well as architectures searched directly on proxy subsets of the target dataset. However, the design of the proxy set has considerable impact on the rankings of different NAS methods. (ii) While different NAS methods show similar performance on a source dataset (e.g., CIFAR10), they differ significantly in transfer performance to a large dataset (e.g., ImageNet1K). (iii) Even on large datasets, the random-sampling baseline is very competitive, but choosing the appropriate combination of proxy set and search strategy can provide a significant improvement over it. We believe that our extensive empirical analysis will prove useful for the future design of NAS algorithms.
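The proxy-then-transfer workflow the abstract describes can be sketched in miniature. The toy code below is not the paper's implementation; the search space, the `proxy_score` stand-in (which would in practice be validation accuracy after training a small cell-based network on, e.g., CIFAR10), and the stacking depth are all illustrative assumptions. It shows the two stages: a random-sampling search for a cell on a cheap proxy, then "transfer" by stacking the fixed cell into a deeper network for the large target (e.g., ImageNet1K).

```python
import random

# Hypothetical stand-in for proxy evaluation: in a real pipeline this would
# train a small network built from `cell` on the proxy dataset and return
# its validation accuracy. Here we just score a toy (op, depth) config.
def proxy_score(cell, rng):
    base = {"conv3x3": 0.70, "conv5x5": 0.65, "skip": 0.50}[cell["op"]]
    return base + 0.01 * cell["depth"] + rng.uniform(-0.02, 0.02)

def random_search(num_samples=20, seed=0):
    """Random-sampling baseline: draw candidate cells uniformly from the
    search space and keep the one scoring best under the proxy metric."""
    rng = random.Random(seed)
    candidates = [
        {"op": rng.choice(["conv3x3", "conv5x5", "skip"]),
         "depth": rng.randint(1, 4)}
        for _ in range(num_samples)
    ]
    return max(candidates, key=lambda c: proxy_score(c, rng))

def transfer(cell, num_stacked=14):
    """'Transfer' the searched building block: keep the cell structure
    fixed and stack more copies to build the larger target network."""
    return {"cell": cell, "stacked_cells": num_stacked}

best_cell = random_search()          # stage 1: search on the proxy
target_net = transfer(best_cell)     # stage 2: transfer to the target
```

The paper's findings concern exactly the knobs this sketch exposes: which proxy dataset sits behind `proxy_score`, and which search strategy replaces `random_search`.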

Published

2021-05-18

How to Cite

Panda, R., Merler, M., Jaiswal, M. S., Wu, H., Ramakrishnan, K., Finkler, U., Chen, C.-F. R., Cho, M., Feris, R., Kung, D., & Bhattacharjee, B. (2021). NASTransfer: Analyzing Architecture Transferability in Large Scale Neural Architecture Search. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9294-9302. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17121

Section

AAAI Technical Track on Machine Learning III