AutoML adoption in software engineering for machine learning
Posted on December 10, 2020 by Guest
By Koen van der Blom, Holger Hoos, Alex Serban, and Joost Visser. In our global survey among teams that build ML applications, we found ample room for increased adoption of AutoML techniques. While AutoML is adopted at least partially by more than 70% of teams in research labs and tech companies, for teams in non-tech and […]
Auto-PyTorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL
Auto-PyTorch is a framework for automated deep learning (AutoDL) that uses BOHB as a backend to optimize the full deep learning pipeline, including data preprocessing, network training techniques, and regularization methods. Auto-PyTorch is the successor of AutoNet, which was one of the first frameworks to perform this joint optimization.
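To give a sense of how this joint optimization is exposed to users, here is a minimal sketch based on the tabular classification interface from the Auto-PyTorch repository; the module path, class name, and argument names are assumptions and may differ between versions.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from autoPyTorch.api.tabular_classification import TabularClassificationTask

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The task object searches over the full pipeline (preprocessing, architecture,
    # training hyperparameters) within the given time budgets.
    api = TabularClassificationTask()
    api.search(X_train=X_train, y_train=y_train,
               optimize_metric="accuracy",
               total_walltime_limit=300,        # overall budget in seconds
               func_eval_time_limit_secs=60)    # budget per pipeline evaluation
    y_pred = api.predict(X_test)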
NAS-Bench-301 and the Case for Surrogate NAS Benchmarks
The Need for Realistic NAS Benchmarks: Neural Architecture Search (NAS) is a logical next step in representation learning, as it removes human bias from architecture design, much as deep learning removed human bias from feature engineering. As such, NAS has experienced rapid growth in recent years, leading to state-of-the-art performance on many tasks. However, empirical […]
Learning Step-Size Adaptation in CMA-ES
In a Nutshell: In CMA-ES, the step size controls how fast or slow a population traverses the search space. Large steps allow you to quickly skip over uninteresting areas (exploration), whereas small steps allow a more focused traversal of interesting areas (exploitation). Handcrafted heuristics usually trade off small and large steps given some measure […]
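For context, the sketch below uses the pycma package to show where the step size enters a standard CMA-ES loop; the post itself is about replacing the handcrafted adaptation rule with a learned policy, which is not shown here.

    import cma

    # The initial step size sigma0 controls how far the first populations spread out;
    # CMA-ES then adapts it during the run (by default with a handcrafted rule, CSA).
    es = cma.CMAEvolutionStrategy(8 * [0.0], 0.5)  # 8-dimensional search, sigma0 = 0.5
    while not es.stop():
        solutions = es.ask()                                        # sample a population
        es.tell(solutions, [cma.ff.sphere(x) for x in solutions])   # evaluate and update
    print(es.result.xbest, es.sigma)   # best solution found and final step size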
Playing Games with Progressive Episode Lengths
A framework for training game-playing agents with evolution strategies (ES) using progressively increasing episode lengths.
Auto-Sklearn 2.0: The Next Generation
Since our initial release of auto-sklearn 0.0.1 in May 2016 and the publication of the NeurIPS paper “Efficient and Robust Automated Machine Learning” in 2015, we have spent a lot of time on maintaining, refactoring, and improving the code, but also on new research. Now, we’re finally ready to share the next version of our flagship […]
Dynamic Algorithm Configuration
Motivation: When designing algorithms, we want them to be as flexible as possible so that they can solve as many problems as possible. However, to solve a specific family of problems well, finding well-performing hyperparameter configurations requires either extensive domain knowledge or extensive resources. The latter is especially true if we […]
Our seven 2019 papers on neural architecture search (NAS)
Neural Architecture Search (NAS) is a very hot topic in AutoML these days, and our group is very actively publishing in this area. We have seven NAS papers in 2019, which may make us one of the world’s most active groups in NAS (only closely surpassed by a small company called Google ;-). Here […]
RobustDARTS
Understanding and Robustifying Differentiable Architecture Search: The search for neural network architectures was initially formulated as a discrete optimization problem, which intrinsically required training and evaluating thousands of networks. This, of course, demanded a huge amount of computational power that only a few institutions could afford. One-shot neural architecture search (NAS) democratized this […]
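To make the shift from discrete to one-shot search concrete, here is a minimal PyTorch sketch of the continuous relaxation used in differentiable architecture search: each edge of the network computes a softmax-weighted mixture of candidate operations, so architecture parameters can be trained by gradient descent instead of evaluating thousands of separate networks. The candidate set below is a small hypothetical subset chosen for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A tiny illustrative set of candidate operations for one edge.
    OPS = {
        "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1, bias=False),
        "skip":    lambda c: nn.Identity(),
        "maxpool": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
    }

    class MixedOp(nn.Module):
        """One edge of the one-shot model: a softmax-weighted sum of candidate ops."""
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList(make(channels) for make in OPS.values())
            # Architecture parameters (alpha), optimized jointly with the network weights.
            self.alpha = nn.Parameter(1e-3 * torch.randn(len(OPS)))

        def forward(self, x):
            weights = F.softmax(self.alpha, dim=0)
            return sum(w * op(x) for w, op in zip(weights, self.ops))

    edge = MixedOp(channels=16)
    out = edge(torch.randn(2, 16, 32, 32))   # output has the same shape as the input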
Best Practices for Scientific Research on Neural Architecture Search
Neural architecture search (NAS) is currently one of the hottest topics in automated machine learning (see the AutoML book), with a seemingly exponential increase in the number of papers written on the subject (see the figure above). While many NAS methods are fascinating (please see our survey article for an overview of the main trends […]