Neural Architecture Search (NAS) automates the design of neural network architectures. NAS approaches optimize the topology of the network, including how to connect nodes and which operators to choose. User-defined optimization metrics can include accuracy, model size or inference time, so that the search arrives at an architecture tailored to a specific application. Due to the extremely large search space, traditional evolution- or reinforcement-learning-based AutoML algorithms tend to be computationally expensive. Recent research has therefore focused on more efficient approaches to NAS; in particular, gradient-based and multi-fidelity methods have provided a promising path and boosted research in these directions. Our group has been very active in developing state-of-the-art NAS methods and has been at the forefront of driving NAS research forward. Below, we summarize some recent important work released by our group.
Open-Source Libraries
NASLib
NASLib is a modular and flexible Neural Architecture Search (NAS) library. Its purpose is to facilitate NAS research in the community and to allow fair comparisons of diverse recent NAS methods by providing a common modular, flexible and extensible codebase. The library also provides an easy-to-use interface to popular NAS benchmarks (e.g. NAS-Bench-101, -201 and -301), with various discrete (e.g. Random Search, Regularized Evolution) and one-shot (DARTS, GDAS, DrNAS, etc.) optimizers already integrated.
The library has an active developer community that maintains it and constantly adds new benchmarks, optimizers and search spaces to the code base.
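For illustration, here is a minimal sketch of how a search can be set up with NASLib, combining a one-shot optimizer with a benchmark search space. It follows the usage pattern from the NASLib examples; the exact module paths and names (Trainer, DARTSOptimizer, NasBench201SearchSpace, get_config_from_args) may differ between versions, so please check the repository for the current API.

```python
# Minimal sketch of running a one-shot optimizer with NASLib.
# Module paths and names follow the documented usage pattern but may
# differ between NASLib versions; check the repository for the current API.
from naslib.defaults.trainer import Trainer
from naslib.optimizers import DARTSOptimizer
from naslib.search_spaces import NasBench201SearchSpace
from naslib.utils import utils

config = utils.get_config_from_args()       # reads CLI arguments / YAML config

search_space = NasBench201SearchSpace()     # a benchmark-backed search space

optimizer = DARTSOptimizer(config)
optimizer.adapt_search_space(search_space)  # build the one-shot model for this space

trainer = Trainer(optimizer, config)
trainer.search()                            # run the architecture search
trainer.evaluate()                          # evaluate the best found architecture
```

Because optimizers and search spaces share this interface, swapping DARTS for another integrated optimizer or changing the search space only requires changing the corresponding class.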
Auto-PyTorch
Based on the well-known DL framework PyTorch, Auto-PyTorch automatically optimizes both the neural architecture and the hyperparameter configuration. To this end, Auto-PyTorch combines ideas from efficient multi-fidelity optimization, meta-learning and ensembling.
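As a quick illustration, the sketch below runs Auto-PyTorch on a small tabular classification task. It follows the TabularClassificationTask example from the Auto-PyTorch documentation; argument names such as optimize_metric and total_walltime_limit are taken from that example and may change between releases.

```python
# Minimal sketch of Auto-PyTorch on a tabular classification task,
# following the TabularClassificationTask example from the documentation;
# argument names may vary across Auto-PyTorch releases.
import sklearn.datasets
import sklearn.model_selection

from autoPyTorch.api.tabular_classification import TabularClassificationTask

X, y = sklearn.datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(
    X, y, random_state=1
)

api = TabularClassificationTask()
api.search(
    X_train=X_train, y_train=y_train,
    X_test=X_test, y_test=y_test,
    optimize_metric='accuracy',
    total_walltime_limit=300,        # seconds for the whole search
)

y_pred = api.predict(X_test)
print("Accuracy score:", api.score(y_pred, y_test))
```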
NAS Benchmarks
Research on NAS is often very expensive because training and evaluating a single deep neural network can take anywhere from minutes to days. We therefore provide several benchmark packages for NAS that offer either tabular or surrogate benchmarks, allowing for efficient research on NAS (see the query sketch after the list below).
- NAS-Bench-101 [PMLR 2019]
- NAS-Bench-1shot1 [ICLR 2020]
- NAS-Bench-301 [NeurIPS 2020, Meta-Learning Workshop]
- LC-Bench [IEEE TPAMI 2020]
- NAS-Bench-x11 [NeurIPS 2021]
- Surrogate NAS benchmarks [ICLR 2022]
- NAS-Bench-Suite [ICLR 2022]
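To make the idea concrete, the sketch below queries the tabular NAS-Bench-101 benchmark instead of training an architecture from scratch: a training run that would take hours on a GPU is replaced by a table lookup that takes seconds. It is based on the example from the NAS-Bench-101 repository (nasbench package); the path to the pre-computed .tfrecord file and the exact keys of the returned dictionary are taken from that example and may differ across releases.

```python
# Minimal sketch of querying the tabular NAS-Bench-101 benchmark instead of
# training architectures; based on the example in the nasbench repository.
# File name and result keys are assumptions from that release.
from nasbench import api

nasbench = api.NASBench('nasbench_only108.tfrecord')  # pre-computed results file

INPUT, OUTPUT = 'input', 'output'
CONV3X3, CONV1X1, MAXPOOL3X3 = 'conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3'

# A cell is described by an upper-triangular adjacency matrix over 7 nodes
# and a list of operations, one per node.
cell = api.ModelSpec(
    matrix=[[0, 1, 1, 1, 0, 1, 0],    # input layer
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0]],   # output layer
    ops=[INPUT, CONV1X1, CONV3X3, CONV3X3, CONV3X3, MAXPOOL3X3, OUTPUT],
)

# Look up pre-computed training statistics instead of training the network.
data = nasbench.query(cell)
print(data['validation_accuracy'], data['training_time'])
```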
Best practices for NAS Research
The rapid development of new NAS approaches makes it hard to compare them against each other. To ensure reliable and reproducible results, we also provide best practices for scientific research on NAS and a checklist for new NAS papers. In our CVPR 2021 NAS Workshop paper, we discuss practical considerations that help improve the stability, efficiency and overall performance of neural architecture search methods.
Selected NAS Papers
Literature Overview
NAS is one of the booming subfields of AutoML and the number of papers is quickly increasing. To provide a comprehensive overview of the recent trends, we provide the following sources:
- NAS survey paper [JMLR 2020]
- A book chapter on NAS from our open-access book, “AutoML: Methods, Systems, Challenges”
- A continuously updated page with a comprehensive NAS literature overview
- A GitHub repo, awesome-transformer-search, that tracks recent work at the intersection of NAS and Transformers
One-Shot NAS Methods
- Understanding and Robustifying Differentiable Architecture Search [ICLR 2020, Oral]
Meta Learning of Neural Architectures
Neural Ensemble Search
- Neural Ensemble Search for Uncertainty Estimation and Dataset Shift [NeurIPS 2021]
- Multi-headed Neural Ensemble Search [ICML 2021, UDL Workshop]
Joint NAS and Hyperparameter Optimization
- Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search [ICML 2018, AutoML Workshop]
- Bag of Baselines for Multi-objective Joint Neural Architecture Search and Hyperparameter Optimization [ICML 2021, AutoML Workshop]
Multi-Objective NAS
- LEMONADE: Efficient Multi-objective Neural Architecture Search via Lamarckian Evolution [ICLR 2019]
- Bag of Baselines for Multi-objective Joint Neural Architecture Search and Hyperparameter Optimization [ICML 2021, AutoML Workshop]