Multi-objective optimization in hyperparameter optimization (HPO) aims to balance several competing objectives at once. Because objectives such as accuracy, computational efficiency, fairness, and model interpretability often conflict, single-objective optimization cannot capture the trade-offs between them. Multi-objective HPO instead considers all objectives simultaneously and, using algorithms such as evolutionary algorithms and Bayesian optimization, explores the hyperparameter space to produce a set of Pareto-optimal solutions. This gives researchers and practitioners a comprehensive menu of trade-offs, so they can make informed choices that align with real-world requirements.
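The trade-off set mentioned above is the Pareto front: a configuration is kept only if no other configuration is at least as good on every objective and strictly better on at least one. A minimal sketch of that filtering step, with made-up (error, runtime) scores for illustration:

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (validation error, runtime) for five hypothetical configurations
candidates = [(0.10, 5.0), (0.12, 2.0), (0.08, 9.0), (0.12, 6.0), (0.20, 1.0)]
print(sorted(pareto_front(candidates)))
# → [(0.08, 9.0), (0.1, 5.0), (0.12, 2.0), (0.2, 1.0)]
```

Note that (0.12, 6.0) is dropped: (0.10, 5.0) is better on both objectives, so it is not a trade-off worth presenting to a decision-maker.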

Bayesian Optimization

MO-SMAC: a multi-objective extension of SMAC that optimizes several objectives at once and also supports multi-fidelity optimization

Evolutionary Algorithms

MO-DEHB: an extension of DEHB that supports both multi-objective and multi-fidelity optimization
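To make the combination of the two ideas concrete, here is a toy sketch of multi-fidelity successive halving with multi-objective selection: survivors are re-evaluated at larger budgets, and at each rung only the non-dominated configurations advance. This is a generic illustration of the principle, not MO-DEHB's actual algorithm, and `evaluate` is a synthetic stand-in for training a model at a given budget.

```python
import random

def evaluate(config, budget):
    # Synthetic stand-in for training at a budget: returns (error, cost).
    lr = config["lr"]
    error = (lr - 0.3) ** 2 + 1.0 / budget  # higher budget -> lower error
    cost = lr * budget
    return error, cost

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def nondominated(scored):
    """Keep configs whose scores are not dominated by any other score."""
    return [c for c, s in scored
            if not any(dominates(t, s) for _, t in scored if t != s)]

def mo_successive_halving(configs, budgets):
    """At each rung, evaluate survivors at a larger budget and keep only
    the non-dominated trade-offs."""
    survivors = configs
    for b in budgets:
        scored = [(c, evaluate(c, b)) for c in survivors]
        survivors = nondominated(scored)
    return survivors

random.seed(0)
configs = [{"lr": random.uniform(0.01, 1.0)} for _ in range(8)]
print(mo_successive_halving(configs, budgets=[1, 3, 9]))
```

A real implementation would keep a fixed fraction per rung and resample new configurations (as Hyperband/DEHB do); keeping exactly the non-dominated set is the simplest multi-objective selection rule.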

Other tools

A Python package for visualizing the variability of Pareto fronts over multiple runs: it computes empirical attainment surfaces, which summarize a front together with its uncertainty across repeated runs
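The empirical attainment idea behind such a tool can be sketched in a few lines: for each level of the first objective, find the second-objective value that at least k of the n runs managed to reach (k=1 gives the best case, k=n the worst case, a middle k roughly the median surface). This is a simplified illustration of the concept, not the package's API, and the run data is invented.

```python
def best_attained(front, x):
    """Best (lowest) second objective a single run attains among its points
    with first objective <= x; inf if none qualify (minimization)."""
    vals = [f2 for f1, f2 in front if f1 <= x]
    return min(vals) if vals else float("inf")

def attainment_surface(fronts, xs, k):
    """k-th empirical attainment values: for each x, the second-objective
    level reached by at least k of the runs."""
    return [sorted(best_attained(f, x) for f in fronts)[k - 1] for x in xs]

# Pareto fronts from three hypothetical runs of the same optimizer
runs = [
    [(0.1, 9.0), (0.3, 4.0), (0.6, 2.0)],
    [(0.2, 8.0), (0.4, 3.0)],
    [(0.1, 7.0), (0.5, 5.0)],
]
print(attainment_surface(runs, xs=[0.2, 0.4, 0.6], k=2))
# → [8.0, 4.0, 3.0]  (the "2-of-3 runs" surface)
```

Plotting such surfaces for several k values shows how much a single run's Pareto front can be trusted, which is exactly the uncertainty the package visualizes.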