AutoML.org

Freiburg-Hannover

CARL: A benchmark to study generalization in Reinforcement Learning

TL;DR: CARL is a benchmark for contextual RL (cRL), where the goal is to generalize over different contexts. With CARL we observed that varying the context makes learning more difficult, and that making the context explicit can facilitate learning. CARL makes the context that defines the environment's behavior visible and configurable. This […]
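The core idea can be sketched in a few lines. The class and context fields below are illustrative toys, not CARL's actual API: a context (here, gravity and pole length) parameterizes the environment's dynamics and can either be hidden from or appended to the observation.

```python
class ContextualCartPole:
    """Toy stand-in for a context-configurable environment (not CARL's real API)."""

    def __init__(self, context, hide_context=False):
        # e.g. context = {"gravity": 9.8, "pole_length": 0.5}
        self.context = context
        self.hide_context = hide_context

    def reset(self):
        state = [0.0, 0.0]  # toy state: position, velocity
        return self._observe(state)

    def _observe(self, state):
        if self.hide_context:
            return list(state)
        # explicit context: concatenate the context values to the state
        return list(state) + [self.context["gravity"], self.context["pole_length"]]


# Training across a distribution of contexts is what cRL generalization targets:
contexts = [{"gravity": g, "pole_length": l}
            for g in (4.9, 9.8) for l in (0.5, 1.0)]
env = ContextualCartPole(contexts[0])
obs = env.reset()  # state plus the visible context
```

With `hide_context=True`, the same sketch models the harder setting where the agent must generalize without seeing what changed.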

Read More

HPOBench: Compare Multi-fidelity Optimization Algorithms with Ease

When researching and developing new hyperparameter optimization (HPO) methods, a good collection of benchmark problems, ideally relevant, realistic and cheap to evaluate, is a very valuable resource. While such collections exist for synthetic problems (COCO) or simple HPO problems (Bayesmark), to the best of our knowledge no such collection exists for multi-fidelity benchmarks. With ever-growing machine […]
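What "multi-fidelity" buys an optimizer can be sketched with a made-up objective (this is not HPOBench's API): a config can be evaluated at a cheap, low budget to rank candidates, and only the promising ones get the expensive, high-budget evaluation.

```python
import math

def toy_benchmark(config, fidelity):
    """Hypothetical multi-fidelity objective: validation loss of a config
    at a given budget (e.g. epochs). Higher fidelity means a more
    converged, less optimistic estimate."""
    # loss has its optimum at lr = 0.1; training longer shrinks the gap
    return (math.log10(config["lr"]) + 1.0) ** 2 + 1.0 / fidelity

# Rank configs cheaply first, then spend the full budget on the survivors:
configs = [{"lr": 10 ** -e} for e in (0.5, 1.0, 2.0, 3.0)]
ranked = sorted(configs, key=lambda c: toy_benchmark(c, fidelity=5))
best = min(ranked[:2], key=lambda c: toy_benchmark(c, fidelity=100))
```

A benchmark collection standardizes exactly this interface, so different multi-fidelity optimizers can be compared on the same objectives.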

Read More

TrivialAugment: You don’t need to tune your augmentations for image classification

Strong image classification models need augmentations; this has been the consensus in the community for a few years now. Some augmentation choices have become standard over time for some datasets, but the question of which augmentation strategy is optimal for a given dataset remained open. This opened the opportunity of applying hyperparameter optimization (HPO) to find optimal augmentation choices. […]
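TrivialAugment's answer is that no tuning is needed at all: for each image, uniformly sample one augmentation and one strength. A sketch of this sampling rule (the operation names are placeholders for the usual image ops, and no actual image transform is applied here):

```python
import random

AUGMENTATIONS = ["rotate", "shear_x", "shear_y", "translate_x", "posterize"]
NUM_STRENGTHS = 31  # discrete strength levels

def trivial_augment(image):
    """Per image: uniformly pick ONE op and ONE strength. No search, no schedule."""
    op = random.choice(AUGMENTATIONS)
    strength = random.randint(0, NUM_STRENGTHS - 1)
    # a real implementation would now apply `op` at `strength` to the image;
    # here we just return the sampled choice
    return image, (op, strength)

_, (op, strength) = trivial_augment("some_image")
```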

Read More

Self-Paced Context Evaluation for Contextual Reinforcement Learning

RL agents, just like humans, often benefit from a difficulty curve in learning [Matiisen et al. 2017, Fuks et al. 2019, Zhang et al. 2020]. Progressing from simple task instances, e.g. walking on flat surfaces or towards goals that are very close to the agent, to more difficult ones lets the agent accomplish much harder […]
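One way to build such a difficulty curve automatically is to let the agent's own value estimates pick the next training instances. The selection rule below is a hypothetical simplification for illustration, not SPaCE's exact criterion:

```python
def next_curriculum(instances, value_estimates, k):
    """Self-paced step (illustrative): train next on the k instances the
    agent currently finds easiest, i.e. those with the highest estimated
    value. As the agent improves, harder instances enter the set."""
    ranked = sorted(instances, key=lambda i: value_estimates[i], reverse=True)
    return ranked[:k]

# Toy example: the agent values flat terrain highly, stairs not yet
values = {"flat": 0.9, "slope": 0.6, "stairs": 0.2}
batch = next_curriculum(list(values), values, k=2)
```

Re-estimating the values after each training round makes the curriculum adapt to the agent rather than following a fixed handcrafted schedule.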

Read More

DACBench: Benchmarking Dynamic Algorithm Configuration

Dynamic Algorithm Configuration (DAC) has been shown to significantly improve algorithm performance over static or even handcrafted dynamic hyperparameter policies [Biedenkapp et al., 2020]. Most algorithms, however, are not designed with DAC in mind and have to be adapted to be controlled online. This requires a great deal of familiarity with the target algorithm as […]
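The DAC framing treats hyperparameter control as a sequential decision problem: a policy observes the target algorithm's state at each step and sets a hyperparameter online. A toy sketch (all names are illustrative, not DACBench's interface), controlling a step size while minimizing f(x) = x²:

```python
def dynamic_policy(state):
    """Dynamic schedule: shrink the step size as iterations progress."""
    return 1.0 / (1.0 + state["iteration"])

def run_target_algorithm(policy, steps=5):
    """Target algorithm exposing its state to the policy each iteration."""
    x = 10.0  # minimize f(x) = x^2 via gradient steps
    for t in range(steps):
        step_size = policy({"iteration": t, "x": x})
        x = x - step_size * 2 * x  # gradient of x^2 is 2x
    return abs(x)

final_gap = run_target_algorithm(dynamic_policy)
```

A benchmark like DACBench standardizes this state/action interface so that controllers can be compared without re-instrumenting each target algorithm by hand.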

Read More

AutoRL: AutoML for RL

Reinforcement learning (RL) has shown impressive results in a variety of applications. Well-known examples include board and video game playing, robotics and, recently, “Autonomous navigation of stratospheric balloons”. Many of these successes came about by combining the expressiveness of deep learning with the power of RL. Already on their own though, both frameworks […]

Read More


AutoML adoption in software engineering for machine learning

By Koen van der Blom, Holger Hoos, Alex Serban and Joost Visser. In our global survey among teams that build ML applications, we found ample room for increased adoption of AutoML techniques. While AutoML is adopted at least partially by more than 70% of teams in research labs and tech companies, for teams in non-tech and […]

Read More

Auto-PyTorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL

Auto-PyTorch is a framework for automated deep learning (AutoDL) that uses BOHB as a backend to optimize the full deep learning pipeline, including data preprocessing, network training techniques and regularization methods. Auto-PyTorch is the successor of AutoNet, which was one of the first frameworks to perform this joint optimization.
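What "optimizing the full pipeline jointly" means can be illustrated with a made-up search space (the choices and ranges below are invented for illustration, not Auto-PyTorch's actual configuration space): preprocessing, training technique and regularization are sampled and evaluated together, since the best value of one often depends on the others.

```python
import math
import random

# Illustrative joint AutoDL search space: one configuration fixes every
# stage of the pipeline at once.
SEARCH_SPACE = {
    "preprocessing": ["standardize", "min_max", "none"],
    "optimizer": ["sgd", "adam"],
    "learning_rate": (1e-4, 1e-1),   # sampled log-uniformly
    "weight_decay": (0.0, 0.1),      # regularization strength
}

def sample_pipeline(rng):
    lo, hi = SEARCH_SPACE["learning_rate"]
    return {
        "preprocessing": rng.choice(SEARCH_SPACE["preprocessing"]),
        "optimizer": rng.choice(SEARCH_SPACE["optimizer"]),
        "learning_rate": 10 ** rng.uniform(math.log10(lo), math.log10(hi)),
        "weight_decay": rng.uniform(*SEARCH_SPACE["weight_decay"]),
    }

config = sample_pipeline(random.Random(0))
```

An optimizer such as BOHB would draw configurations like this, evaluate them at increasing budgets, and focus its model on the promising regions of the joint space.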

Read More

NAS-Bench-301 and the Case for Surrogate NAS Benchmarks

The Need for Realistic NAS Benchmarks

Neural Architecture Search (NAS) is a logical next step in representation learning as it removes human bias from architecture design, similar to deep learning removing human bias from feature engineering. As such, NAS has experienced rapid growth in recent years, leading to state-of-the-art performance on many tasks. However, empirical […]
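The surrogate idea itself is simple to sketch: instead of training every architecture, fit a cheap predictive model once on (architecture, accuracy) pairs, then answer NAS queries by prediction. Below, a 1-nearest-neighbour lookup over toy architecture encodings stands in for NAS-Bench-301's trained regressor; the table is made up:

```python
# Made-up (architecture encoding -> validation accuracy) training data.
TRAINED = {
    (1, 0, 0): 0.88,
    (0, 1, 0): 0.91,
    (0, 0, 1): 0.85,
}

def surrogate_accuracy(encoding):
    """Predict accuracy without training: 1-nearest-neighbour surrogate."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(TRAINED, key=lambda e: dist(e, encoding))
    return TRAINED[nearest]

# A NAS method can now query arbitrary encodings in microseconds:
predicted = surrogate_accuracy((0, 1, 1))
```

The benchmark's value comes from the surrogate covering the whole search space, including architectures that were never trained, at a tiny fraction of the evaluation cost.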

Read More