HyperSHAP: Shapley Values and Interactions for Explaining Hyperparameter Optimization

Authors

  • Marcel Wever, L3S Research Center, Leibniz University Hannover
  • Maximilian Muschalik, MCML, LMU Munich
  • Fabian Fumagalli, MCML, LMU Munich
  • Marius Lindauer, L3S Research Center, Leibniz University Hannover

DOI:

https://doi.org/10.1609/aaai.v40i32.39898

Abstract

Hyperparameter optimization (HPO) is a crucial step in achieving strong predictive performance. Yet, the impact of individual hyperparameters on model generalization is highly context-dependent, prohibiting a one-size-fits-all solution and requiring automated HPO methods to find optimal configurations. However, the black-box nature of most HPO methods undermines user trust and discourages adoption. To address this, we propose a game-theoretic explainability framework for HPO based on Shapley values and interactions. Our approach provides an additive decomposition of a performance measure across hyperparameters, enabling local and global explanations of hyperparameters' contributions and their interactions. The framework, named HyperSHAP, offers insights into ablation studies, the tunability of learning algorithms, and optimizer behavior across different hyperparameter spaces. We demonstrate HyperSHAP's capabilities on various HPO benchmarks by analyzing the interaction structure of the corresponding HPO problems, showing its broad applicability and the actionable insights it yields for improving HPO.
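The additive decomposition described in the abstract can be illustrated with exact Shapley values over a toy hyperparameter space. The sketch below is not the authors' implementation: the two hyperparameters, the accuracy table, and the "tunability"-style coalition game (the value of a subset of hyperparameters is the best performance reachable when only that subset may deviate from its defaults) are illustrative assumptions for exposition.

```python
from itertools import combinations, product
from math import factorial

# Hypothetical toy benchmark: validation accuracy as a function of two
# hyperparameters, each with a default and one alternative value.
# (Illustrative numbers, not taken from the paper.)
PERF = {
    (0.1, 3): 0.70,   # all defaults
    (0.01, 3): 0.78,  # tuned lr only
    (0.1, 7): 0.74,   # tuned depth only
    (0.01, 7): 0.85,  # both tuned (positive interaction)
}
DEFAULTS = {"lr": 0.1, "depth": 3}
GRID = {"lr": [0.1, 0.01], "depth": [3, 7]}

def performance(config):
    return PERF[(config["lr"], config["depth"])]

def value(coalition):
    """Tunability-style game: best performance reachable when only the
    hyperparameters in `coalition` may deviate from their defaults."""
    names = sorted(coalition)
    best = float("-inf")
    # product() over zero grids yields one empty tuple, so the empty
    # coalition correctly evaluates to the all-defaults performance.
    for vals in product(*(GRID[n] for n in names)):
        config = dict(DEFAULTS)
        config.update(zip(names, vals))
        best = max(best, performance(config))
    return best

def shapley(player, players):
    """Exact Shapley value: weighted marginal contributions of `player`
    over all coalitions of the remaining hyperparameters."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += w * (value(set(S) | {player}) - value(set(S)))
    return total

players = ["lr", "depth"]
phi = {p: shapley(p, players) for p in players}
print(phi)  # attributions summing to value(all) - value(none) = 0.15
```

By the efficiency axiom, the attributions sum exactly to the total tuning gain (0.85 - 0.70), which is the additive decomposition the abstract refers to; HyperSHAP additionally quantifies interaction terms, which this pairwise toy example omits.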

Published

2026-03-14

How to Cite

Wever, M., Muschalik, M., Fumagalli, F., & Lindauer, M. (2026). HyperSHAP: Shapley Values and Interactions for Explaining Hyperparameter Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 26867–26875. https://doi.org/10.1609/aaai.v40i32.39898

Section

AAAI Technical Track on Machine Learning IX