Neural Architecture Search

Neural Architecture Search (NAS) automates the design of deep neural network architectures. A NAS method searches over design decisions such as the number of layers, the number of neurons per layer, and the type of activation functions, in order to find an architecture and hyperparameter configuration that optimizes user-defined metrics such as accuracy, model size, or inference time. Because the search space is extremely large, traditional evolution- or reinforcement-learning-based AutoML algorithms tend to be computationally expensive. Hence, recent research on the topic has focused on more efficient approaches to architecture search. In particular, recently developed gradient-based methods have provided a promising path and boosted research in this direction.
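To illustrate the gradient-based idea, below is a minimal sketch of a DARTS-style continuously relaxed operation choice, assuming PyTorch; the `MixedOp` class and the candidate operations are illustrative placeholders and not code from any of our released packages.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of a discrete operation choice (DARTS-style sketch)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # One architecture parameter (alpha) per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        # A softmax over the alphas gives a differentiable weighting of the
        # candidates, so architecture parameters can be trained by gradient descent.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Hypothetical candidate operations for a single edge of a cell:
candidates = [
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.Conv2d(16, 16, kernel_size=5, padding=2),
    nn.Identity(),
]
edge = MixedOp(candidates)
out = edge(torch.randn(2, 16, 32, 32))
```

After training, the discrete architecture is typically recovered by keeping, on each edge, the operation with the largest architecture weight.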

Literature on NAS

  • NAS papers from our group
  • Also check the corresponding chapter from the book, “AutoML: Methods, Systems, Challenges”
  • Comprehensive literature overview

Packages from our Group

Best Practices for NAS

We also provide best practices for scientific research on NAS, along with our checklist.

Current NAS Researchers and Research Focus

We have been focusing on gradient-based NAS and multi-objective NAS, and have applied NAS to various real-world problems, such as finding efficient architectures for deep-learning-based decoding of EEG brain signals.

If you are interested in, have ideas for, or would like to collaborate on individual projects, reach out to the relevant researchers from our group working on NAS:

Arber Zela, Yash Mehta, Mahmoud Safari, Michael Reutche