M-NAS: Meta Neural Architecture Search

Authors

  • Jiaxing Wang, Institute of Automation, Chinese Academy of Sciences
  • Jiaxiang Wu, Tencent AI Lab
  • Haoli Bai, The Chinese University of Hong Kong
  • Jian Cheng, Institute of Automation, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v34i04.6084

Abstract

Neural Architecture Search (NAS) has recently outperformed hand-crafted networks in various areas. However, most prevalent NAS methods focus on a single pre-defined task. For a previously unseen task, the architecture is either searched from scratch, which is inefficient, or transferred from one obtained on another task, which may be sub-optimal. In this paper, we investigate a previously unexplored problem: does a universal NAS method exist that can effectively generate task-aware architectures? To this end, we propose Meta Neural Architecture Search (M-NAS). To obtain task-specific architectures, M-NAS adopts a task-aware architecture controller for child model generation. Since the optimal weights for different tasks and architectures vary widely, we resort to meta-learning and learn meta-weights that efficiently adapt to a new task on the corresponding architecture within only a few gradient descent steps. Experimental results demonstrate the superiority of M-NAS over a number of competitive baselines on both toy regression and few-shot classification problems.
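
The adaptation step described in the abstract follows the gradient-based meta-learning recipe: starting from shared meta-weights, a few inner gradient steps on a task's support data yield task-specific weights for the generated child architecture. Below is a minimal first-order sketch of that idea in PyTorch; the `child_model(x, params=...)` functional-call interface, the helper name `adapt_to_task`, and all hyperparameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal first-order sketch of few-step task adaptation from meta-weights.
# Assumes (hypothetically) that `child_model` supports a functional forward
# pass taking an explicit `params` dict, as in common MAML implementations.
import torch
import torch.nn.functional as F

def adapt_to_task(child_model, meta_weights, support_x, support_y,
                  inner_lr=0.01, inner_steps=5):
    """Return task-specific weights after a few gradient steps from the meta-weights."""
    # Start the inner loop from a detached copy of the meta-weights.
    fast = {name: w.detach().clone().requires_grad_(True)
            for name, w in meta_weights.items()}
    for _ in range(inner_steps):
        logits = child_model(support_x, params=fast)   # hypothetical interface
        loss = F.cross_entropy(logits, support_y)
        grads = torch.autograd.grad(loss, list(fast.values()))
        # One SGD step on the support set (first-order: no second derivatives).
        fast = {name: (w - inner_lr * g).detach().requires_grad_(True)
                for (name, w), g in zip(fast.items(), grads)}
    return fast
```

The adapted weights would then be evaluated on the task's query set to form the meta-objective; the full method additionally couples this loop with the task-aware controller that generates the child architecture, which is omitted in this sketch.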

Published

2020-04-03

How to Cite

Wang, J., Wu, J., Bai, H., & Cheng, J. (2020). M-NAS: Meta Neural Architecture Search. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6186-6193. https://doi.org/10.1609/aaai.v34i04.6084

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning