DACBench: Benchmarking Dynamic Algorithm Configuration

Dynamic Algorithm Configuration (DAC) has been shown to significantly improve algorithm performance over static or even handcrafted dynamic hyperparameter policies [Biedenkapp et al., 2020]. Most algorithms, however, are not designed with DAC in mind and have to be adapted before they can be controlled online. This requires a great deal of familiarity with the target algorithm as well as a background in DAC itself. Additionally, many design decisions are involved in creating DAC benchmarks, so the same algorithm can be turned into a benchmark in several different ways. These factors severely limit the number of available DAC benchmarks and, at the same time, make it hard to reproduce existing experiments. To help the field progress, we created DACBench, a benchmark library for DAC that provides a set of diverse initial benchmarks, a template to facilitate the creation of new ones, and a lower barrier of entry through an easy-to-use, domain-independent interface.

A Template Interface for Benchmarks

There are several possibilities for both how to model DAC problems and how to solve them. The creation of any benchmark therefore depends on domain knowledge as well as knowledge of the intended solution method. This also hurts comparability between approaches: a reinforcement learning method could solve the same DAC problem as an algorithm configuration tool like SMAC, but their interfaces may look very different, making it hard to determine which approach performs better. We therefore believe a standardized interface can benefit researchers developing new DAC approaches by providing an easy way to measure their effectiveness against other methods. The same goes for domain experts who lack the expertise to model their algorithm as a DAC problem without a template to guide them. To keep this interface as general as possible, we model DAC as a contextual MDP, as proposed by Biedenkapp et al. [2020]. We extend the well-known OpenAI Gym interface [Brockman et al., 2016] as it is easy to use and straightforward to implement. Figure 1 shows how a DAC solution approach interacts with the components of a DACBench benchmark.

Figure 1: Interactions between DACBench components and DAC solvers.
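To illustrate this gym-style interaction loop, here is a minimal sketch of running a random policy on one of the included benchmarks. It assumes the SigmoidBenchmark class and the get_environment() method as named in the DACBench documentation; exact class names and the reset/step signatures may differ between versions.

```python
# Minimal sketch of the gym-style DACBench interaction loop.
# Assumes the SigmoidBenchmark class and get_environment() method from the
# DACBench documentation; names and signatures may differ between versions.
from dacbench.benchmarks import SigmoidBenchmark

benchmark = SigmoidBenchmark()
env = benchmark.get_environment()  # gym.Env-style DAC environment

state = env.reset()
done = False
total_reward = 0.0
while not done:
    # A real DAC policy would choose the next hyperparameter setting
    # based on the current state; here we simply sample at random.
    action = env.action_space.sample()
    state, reward, done, info = env.step(action)
    total_reward += reward

print(f"Episode return: {total_reward}")
```

Because every benchmark exposes the same observation, action, and reward interface, the same loop works for reinforcement learning agents, algorithm configuration tools, or any other DAC solver.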

The Six Initial Benchmarks

As a first set of benchmarks, we collected previously known DAC applications. They include two function approximation tasks, controlling the heuristic in AI planning, adapting the learning rate of a neural network, as well as setting either the step-size or the combinations of heuristics of CMA-ES [Daniel et al., 2016, Biedenkapp et al., 2020, Shala et al., 2020, Speck et al., 2021]. Combined, they cover many different aspects of DAC benchmarks, providing an interesting cross section of the problem space (see Figure 2).

Figure 2: DACBench benchmarks and their difficulty in different dimensions.
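For reference, the sketch below shows how these benchmarks are exposed as classes in the library. The class names follow the DACBench documentation at the time of writing and should be treated as assumptions, since they may change between versions.

```python
# Sketch: the initial benchmarks as exposed in dacbench.benchmarks.
# Class names are taken from the DACBench documentation and are assumptions
# here; they may differ in other versions of the library.
from dacbench.benchmarks import (
    SigmoidBenchmark,       # function approximation (Sigmoid)
    LubyBenchmark,          # function approximation (Luby sequence)
    FastDownwardBenchmark,  # heuristic selection in AI planning
    SGDBenchmark,           # learning rate control for neural networks
    CMAESBenchmark,         # CMA-ES step-size adaptation
    ModeaBenchmark,         # selecting combinations of heuristics in a modular EA
)

# Each benchmark object can be turned into a gym-style environment:
envs = [b().get_environment() for b in (SigmoidBenchmark, LubyBenchmark)]
```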

Additionally, they have very different difficulties, ranging from completely solved toy problems to open research questions with a large space of possible solutions. Therefore even the initial DACBench set should provide a challenge for any DAC approach, as well as detailed information about the approach’s strengths and weaknesses. For more detailed information on the initial benchmarks and their attributes, see our IJCAI 2021 paper or the video presentation.

DACBench: A Collaborative Effort

While we provide a set of initial benchmarks in DACBench version 0.0.1, there is an ongoing effort to extend DACBench to new domains. Since its release, experts from different fields have come together to improve the benchmarks and connect them to existing tools in their respective domains; one example is IOHProfiler [Doerr et al., 2018], which is now used for our evolutionary computing benchmarks. We are also considering completely new domains, such as the theory-driven “OneLL” benchmark. We hope DACBench can continue to bring experts from target domains together and foster collaboration between the fields.

References

[Biedenkapp et al., 2020] A. Biedenkapp, H. F. Bozkurt, T. Eimer, F. Hutter and M. Lindauer. Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework. In Proceedings of ECAI, pages 427–434, 2020.

[Brockman et al., 2016] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang and W. Zaremba. OpenAI Gym. CoRR, abs/1606.01540, 2016.

[Speck et al., 2021] D. Speck, A. Biedenkapp, F. Hutter, R. Mattmüller and M. Lindauer. Learning heuristic selection with dynamic algorithm configuration. In Proceedings of ICAPS, 2021.

[Daniel et al., 2016] C. Daniel, J. Taylor and S. Nowozin. Learning step size controllers for robust neural network training. In Proceedings of AAAI, 2016.

[Shala et al., 2020] G. Shala, A. Biedenkapp, N. Awad, S. Adriaensen, M. Lindauer and F. Hutter. Learning step-size adaptation in CMA-ES. In Proceedings of PPSN, pages 691–706, 2020.

[Doerr et al., 2018] C. Doerr, H. Wang, F. Ye, S. van Rijn and T. Bäck. IOHprofiler: A benchmarking and profiling tool for iterative optimization heuristics. arXiv e-prints:1810.05281, 2018.
