HPO speedup with learning curve extrapolation

In the context of multi-fidelity HPO, each fidelity yields a different learning curve. An example of a learning curve is the progression of accuracy over training epochs. In practice, human experts commonly rely on the observed learning curve to stop poorly performing configurations early. This line of research aims to exploit the same information to speed up HPO methods. Concretely, we propose Bayesian models that extrapolate the observed learning curve to higher fidelities. By estimating the performance of configurations at higher fidelities, we can effectively discard unpromising runs.
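To make the idea concrete, here is a minimal sketch of learning curve extrapolation. It is not the Bayesian models proposed here: instead of a posterior it fits a single pow3-style parametric curve, `c - a * t**(-alpha)`, to the observed accuracies by least squares, extrapolates it to a higher epoch budget, and uses the prediction to decide whether to keep a run. All names, thresholds, and the toy data are illustrative assumptions.

```python
import numpy as np

def fit_pow3(t, y, alphas=np.linspace(0.1, 3.0, 30)):
    """Fit y ~ c - a * t**(-alpha): grid search over alpha,
    linear least squares for (c, a) at each candidate alpha."""
    best = None
    for alpha in alphas:
        # Design matrix for the linear parameters (c, a).
        X = np.column_stack([np.ones_like(t, dtype=float), -t ** (-alpha)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ coef) ** 2)
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], alpha)
    _, c, a, alpha = best
    return c, a, alpha

# Toy learning curve: accuracies observed over the first 10 epochs.
t = np.arange(1, 11)
y = 0.92 - 0.5 * t ** (-0.7)

c, a, alpha = fit_pow3(t, y)

# Extrapolate to the full budget of 100 epochs (a higher fidelity).
pred_100 = c - a * 100 ** (-alpha)

# Discard the run if the predicted final accuracy misses a target.
keep_running = pred_100 >= 0.85
```

A fully Bayesian variant would place priors on `(c, a, alpha)` and discard a run based on the posterior probability of beating the incumbent, rather than on a single point estimate.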

Our Work