AutoML.org

Freiburg-Hannover-Tübingen

Contextualize Me – The Case for Context in Reinforcement Learning

Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter and Marius Lindauer

TLDR: We can model and investigate generalization in RL with contextual RL and our benchmark library CARL. In theory, we cannot achieve optimal performance without adding context, and in our experiments we saw that using context […]
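As a rough illustration of what a context-parameterized environment looks like, here is a minimal sketch in plain gymnasium code. It is not CARL's actual API; the context variables and ranges (gravity, pole length) are only illustrative. Each episode samples a context, applies it to the dynamics, and appends it to the observation so the agent can condition on it.

```python
import numpy as np
import gymnasium as gym


class ContextualCartPole(gym.Wrapper):
    """Toy contextual-RL wrapper (illustrative, not CARL's API): each episode
    samples a context (gravity, pole length), applies it to the underlying
    CartPole dynamics, and appends it to the observation."""

    def __init__(self, env, context_space, seed=0):
        super().__init__(env)
        self.context_space = context_space  # name -> (low, high)
        self.rng = np.random.default_rng(seed)
        self.context = None
        # Extend the observation space by one dimension per context feature.
        lows = np.concatenate([env.observation_space.low,
                               [lo for lo, _ in context_space.values()]]).astype(np.float32)
        highs = np.concatenate([env.observation_space.high,
                                [hi for _, hi in context_space.values()]]).astype(np.float32)
        self.observation_space = gym.spaces.Box(lows, highs, dtype=np.float32)

    def reset(self, **kwargs):
        # Sample a fresh context and write it into the physics parameters.
        self.context = {k: self.rng.uniform(lo, hi)
                        for k, (lo, hi) in self.context_space.items()}
        self.env.unwrapped.gravity = self.context["gravity"]
        self.env.unwrapped.length = self.context["length"]
        obs, info = self.env.reset(**kwargs)
        return self._augment(obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        return self._augment(obs), reward, terminated, truncated, info

    def _augment(self, obs):
        ctx = np.array([self.context[k] for k in self.context_space], dtype=np.float32)
        return np.concatenate([obs, ctx])


env = ContextualCartPole(gym.make("CartPole-v1"),
                         context_space={"gravity": (4.9, 19.6), "length": (0.25, 1.0)})
obs, _ = env.reset(seed=1)
print(obs.shape)  # (6,) = 4 state dimensions + 2 context dimensions
```

Comparing the same agent with and without the appended context across a range of contexts is the kind of experiment the post refers to.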

Read More

Hyperparameter Tuning in Reinforcement Learning is Easy, Actually

Hyperparameter optimization tools perform well on reinforcement learning, outperforming grid search with less than 10% of its budget. If the tuning is not reported correctly, however, it can heavily skew future comparisons.
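As a back-of-the-envelope illustration of that budget gap, here is a hedged sketch in which a placeholder train_and_evaluate function stands in for a full RL training run; the hyperparameters, ranges, and scoring are hypothetical and not taken from the post's experiments.

```python
import itertools
import random

# Hypothetical stand-in for training an RL agent and returning its mean return;
# in practice this would launch a complete training run per configuration.
def train_and_evaluate(lr, gamma, clip_range):
    return -(abs(lr - 3e-4) / 3e-4 + abs(gamma - 0.99) + abs(clip_range - 0.2))

# Grid search: 5 values per hyperparameter -> 125 training runs.
grid = list(itertools.product(
    [1e-5, 1e-4, 3e-4, 1e-3, 1e-2],   # learning rate
    [0.9, 0.95, 0.99, 0.995, 0.999],  # discount factor
    [0.1, 0.2, 0.3, 0.4, 0.5],        # PPO clip range
))

# Random search with under 10% of that budget samples the ranges continuously.
random.seed(0)
samples = [
    (10 ** random.uniform(-5, -2), random.uniform(0.9, 0.999), random.uniform(0.1, 0.5))
    for _ in range(12)
]

best_grid = max(grid, key=lambda cfg: train_and_evaluate(*cfg))
best_rand = max(samples, key=lambda cfg: train_and_evaluate(*cfg))
print(f"grid search:   {len(grid)} runs, best config {best_grid}")
print(f"random search: {len(samples)} runs, best config {best_rand}")
```

Reporting the search space and tuning budget alongside the final results is what keeps later comparisons fair.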

Read More

Self-Paced Context Evaluation for Contextual Reinforcement Learning

RL agents, just like humans, often benefit from a difficulty curve in learning [Matiisen et al., 2017; Fuks et al., 2019; Zhang et al., 2020]. Progressing from simple task instances, e.g., walking on flat surfaces or towards goals that are very close to the agent, to more difficult ones lets the agent accomplish much harder […]
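A minimal sketch of a difficulty-ordered curriculum over task instances follows; this is not the SPaCE algorithm itself, and the instance set, difficulty proxy, and schedule are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical context instances: goal distance acts as a rough difficulty proxy.
instances = [{"goal_distance": d} for d in rng.uniform(1.0, 50.0, size=20)]

def estimate_value(agent, instance):
    # Placeholder for the agent's value estimate on an instance;
    # a real implementation would query the learned value function.
    return -instance["goal_distance"] + rng.normal(scale=0.1)

def train_on(agent, curriculum):
    # Placeholder for one round of RL training on the selected instances.
    pass

agent = None  # stands in for any value-based or actor-critic agent

for round_ in range(5):
    # Rank instances from easiest (highest estimated value) to hardest.
    ranked = sorted(instances, key=lambda inst: estimate_value(agent, inst), reverse=True)
    # Self-paced expansion: start with a few easy instances, add harder ones each round.
    k = max(2, int(len(ranked) * (round_ + 1) / 5))
    train_on(agent, ranked[:k])
    print(f"round {round_}: training on {k} of {len(instances)} instances")
```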

Read More

DACBench: Benchmarking Dynamic Algorithm Configuration

Dynamic Algorithm Configuration (DAC) has been shown to significantly improve algorithm performance over static or even handcrafted dynamic hyperparameter policies [Biedenkapp et al., 2020]. Most algorithms, however, are not designed with DAC in mind and have to be adapted to be controlled online. This requires a great deal of familiarity with the target algorithm as […]
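A sketch of what "controlling an algorithm online" can look like once a target algorithm is wrapped in a gym-style interface: the state describes the algorithm's progress, the action sets a hyperparameter for the next step, and the reward measures how good that choice was. The class and benchmark below are hypothetical stand-ins, not DACBench's actual classes.

```python
import numpy as np

class SigmoidToyBenchmark:
    """Hypothetical gym-style DAC environment: at every step the controller
    picks a hyperparameter value, the 'target algorithm' advances one step,
    and the reward reflects how close the value was to the optimum."""

    def __init__(self, horizon=10, seed=0):
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.shift = self.rng.uniform(0, self.horizon)  # instance-specific optimum
        return self._state()

    def step(self, action):
        # The optimal value follows a sigmoid over time; reward is the negative gap.
        optimal = 1.0 / (1.0 + np.exp(-(self.t - self.shift)))
        reward = -abs(action - optimal)
        self.t += 1
        return self._state(), reward, self.t >= self.horizon, {}

    def _state(self):
        return np.array([self.t, self.shift], dtype=np.float32)


env = SigmoidToyBenchmark()
state, done, total = env.reset(), False, 0.0
while not done:
    action = np.random.uniform(0, 1)  # a learned DAC policy would choose this from the state
    state, reward, done, info = env.step(action)
    total += reward
print(f"episode return: {total:.3f}")
```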

Read More