
Our seven 2019 papers on neural architecture search (NAS)

Neural Architecture Search (NAS) is a very hot topic in AutoML these days, and our group is publishing very actively in this area. We published seven NAS papers in 2019, which may make us one of the world’s most active groups in NAS (closely surpassed only by a small company called Google ;-). Here are the papers and quick blog posts about them:

    • Neural Architecture Search: A Survey: this is the first survey of the modern NAS literature. We survey the field and categorize research along three dimensions: search space, search strategy, and performance estimation strategy. The paper was published in JMLR in 2019, and we’ll keep the arXiv version updated regularly.
    • Efficient Multi-objective NAS via Network Morphisms: in this ICLR 2019 paper, written with our collaborators at the Bosch Center for Artificial Intelligence, we introduce a very natural and efficient multi-objective NAS approach. Do you care about model size, test-time speed, and performance on different benchmarks? No problem: the approach is truly multi-objective (no scalarization of multiple objectives, as in many other works).
    • Learning to Design RNA (a.k.a. AutoRL based on joint NAS & HPO): in this ICLR 2019 paper, we tackle an application problem (RNA design), but to do so we use NAS to optimize the neural architectures of deep RL algorithms. We also jointly optimize other hyperparameters and parameters of the MDP formulation, yielding a hands-off version of PPO that paves the way towards AutoRL!
    • NAS-Bench-101: in this ICML 2019 paper, together with our collaborators at Google Brain, we introduce the first benchmark for NAS. We exhaustively evaluated a small cell search space, so anyone can now run NAS experiments on this space in minutes on their laptop (see the query sketch after this list), facilitating reproducibility, comparability, and scientific rigor in NAS research!
    • AutoDispNet: Improving Disparity Estimation with AutoML: in this ICCV 2019 paper, we show how to optimize U-Net-like architectures for dense regression and to tune their hyperparameters with BOHB (a minimal BOHB usage sketch follows this list), yielding better performance for disparity estimation than expert researchers achieved manually.
    • Best practices for scientific research on NAS: in this blog post and new arXiv paper, we describe 14 best practices for scientific research on NAS, which we hope will move the NAS community forward. They’re grouped into 3 best practices concerning the release of code, 8 best practices for comparing different NAS methods, and 3 best practices for reporting important details. See also our resulting NAS best practice checklist.
    • Understanding and Robustifying Differentiable Architecture Search: in this new arXiv paper, we shed more light on the architecture optimization in DARTS, one of the most popular NAS algorithms of the past year. Through extensive empirical evaluations, we demonstrate the brittleness of DARTS on many benchmarks and propose fixes that robustify the architecture search process.
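
To illustrate the NAS-Bench-101 point above, here is a minimal sketch of querying the tabular benchmark with the nasbench package released alongside the paper. The dataset path and the particular cell are placeholders; the adjacency-matrix-plus-operations encoding follows the package’s documented format.

```python
# Minimal sketch: querying NAS-Bench-101 (assumes the `nasbench` package and a
# locally downloaded dataset file; the path below is a placeholder).
import numpy as np
from nasbench import api

nasbench = api.NASBench('/path/to/nasbench_only108.tfrecord')  # placeholder path

# A cell is an upper-triangular adjacency matrix plus a list of operations.
matrix = np.array([[0, 1, 1, 0, 0, 0, 0],   # input node
                   [0, 0, 0, 1, 0, 0, 0],
                   [0, 0, 0, 0, 1, 0, 0],
                   [0, 0, 0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 0, 0, 0]])  # output node
ops = ['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu',
       'conv3x3-bn-relu', 'maxpool3x3', 'conv3x3-bn-relu', 'output']

spec = api.ModelSpec(matrix=matrix, ops=ops)
if nasbench.is_valid(spec):
    data = nasbench.query(spec)  # table lookup of precomputed training results
    print(data['validation_accuracy'], data['test_accuracy'])
```

Because every query is a table lookup rather than an actual training run, a full NAS experiment on this search space indeed takes only minutes on a laptop.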
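
For the AutoDispNet entry, here is a minimal sketch of hyperparameter optimization with BOHB via the hpbandster package. The toy search space and dummy objective are illustrative assumptions; the paper itself tunes the hyperparameters of a large disparity-estimation network.

```python
# Minimal sketch: hyperparameter optimization with BOHB (hpbandster package).
# The search space and the toy objective below are placeholders for illustration.
import ConfigSpace as CS
import ConfigSpace.hyperparameters as CSH
import hpbandster.core.nameserver as hpns
from hpbandster.core.worker import Worker
from hpbandster.optimizers import BOHB


class MyWorker(Worker):
    def compute(self, config, budget, **kwargs):
        # In a real run: train the model for `budget` epochs with `config`
        # and return its validation loss. Here we use a toy objective.
        loss = (config['lr'] - 1e-3) ** 2
        return {'loss': loss, 'info': {'budget': budget}}


# Define the search space (a single learning-rate hyperparameter as a placeholder).
cs = CS.ConfigurationSpace()
cs.add_hyperparameter(
    CSH.UniformFloatHyperparameter('lr', lower=1e-5, upper=1e-1, log=True))

# Start a nameserver, a worker, and the BOHB optimizer.
ns = hpns.NameServer(run_id='demo', host='127.0.0.1', port=None)
ns.start()
worker = MyWorker(nameserver='127.0.0.1', run_id='demo')
worker.run(background=True)

bohb = BOHB(configspace=cs, run_id='demo', nameserver='127.0.0.1',
            min_budget=1, max_budget=27)
result = bohb.run(n_iterations=10)

bohb.shutdown(shutdown_workers=True)
ns.shutdown()

print('Incumbent configuration id:', result.get_incumbent_id())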