LAMDA: Two-Phase HPO via Learning Prior from Low-Fidelity Data

Authors

  • Fan Li Central South University
  • Shengbo Wang University of Electronic Science and Technology of China
  • Ke Li University of Exeter

DOI:

https://doi.org/10.1609/aaai.v40i43.41030

Abstract

Hyperparameter Optimization (HPO) is crucial in machine learning, aiming to tune hyperparameters to enhance model performance. Although existing methods that leverage prior knowledge—drawn from either previous experiments or expert insights—can accelerate optimization, acquiring a correct prior for a specific HPO task is non-trivial. In this work, we propose to relieve the reliance on external knowledge by learning a reliable prior directly from low-fidelity (LF) problems. We introduce Lamda, an algorithm-agnostic framework designed to boost any baseline HPO algorithm. Specifically, Lamda operates in two phases: (1) it learns a reliable prior by exploring the LF landscape under a limited computational budget, and (2) it leverages this learned prior to guide the HPO process. We showcase how the Lamda framework can be integrated with various HPO algorithms to boost their performance, and further conduct theoretical analysis of its integration with Bayesian optimization and the bandit-based Hyperband. We conduct experiments on 56 HPO problems spanning diverse domains and model scales. Results show that Lamda consistently enhances its baseline algorithms. Compared to nine state-of-the-art HPO algorithms, our Lamda variant achieves the best performance in 51 of the 56 HPO tasks and ranks second in the remaining 5.
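To make the two-phase idea in the abstract concrete, below is a minimal toy sketch, not the authors' implementation: phase 1 explores a one-dimensional hyperparameter space at low fidelity to fit a simple Gaussian prior over promising configurations, and phase 2 samples from that prior for full-fidelity evaluation. The objective function, budgets, and the Gaussian form of the prior are all illustrative assumptions.

```python
import random

def objective(x, fidelity=1.0):
    # Toy stand-in for model validation loss under hyperparameter x.
    # Lower fidelity (e.g., fewer training epochs) yields a cheap, noisy estimate.
    true_loss = (x - 0.3) ** 2
    noise = (1.0 - fidelity) * 0.05 * random.random()
    return true_loss + noise

def two_phase_hpo(lf_budget=30, hf_budget=10, seed=0):
    rng = random.Random(seed)

    # Phase 1: explore the low-fidelity landscape under a limited budget.
    lf_results = []
    for _ in range(lf_budget):
        x = rng.random()
        lf_results.append((objective(x, fidelity=0.1), x))
    lf_results.sort()

    # Learn a prior: a Gaussian centred on the best low-fidelity configurations
    # (an illustrative choice, not the paper's prescribed prior).
    top = [x for _, x in lf_results[: max(3, lf_budget // 10)]]
    mu = sum(top) / len(top)
    sigma = max(0.05, max(top) - min(top))

    # Phase 2: let the learned prior guide full-fidelity search.
    best_loss, best_x = float("inf"), None
    for _ in range(hf_budget):
        x = min(1.0, max(0.0, rng.gauss(mu, sigma)))
        loss = objective(x, fidelity=1.0)
        if loss < best_loss:
            best_loss, best_x = loss, x
    return best_x, best_loss

best_x, best_loss = two_phase_hpo()
```

Because the prior concentrates full-fidelity evaluations near the region the cheap low-fidelity runs identified, the small high-fidelity budget is spent close to the optimum rather than uniformly across the space.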

Published

2026-03-14

How to Cite

Li, F., Wang, S., & Li, K. (2026). LAMDA: Two-Phase HPO via Learning Prior from Low-Fidelity Data. Proceedings of the AAAI Conference on Artificial Intelligence, 40(43), 37018–37026. https://doi.org/10.1609/aaai.v40i43.41030

Section

AAAI Technical Track on Search and Optimization