Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization

From MaRDI portal

Publication: 6342430

arXiv: 2006.05078
MaRDI QID: Q6342430

Eytan Bakshy, Maximilian Balandat, Samuel Daulton

Publication date: 9 June 2020

Abstract: In many real-world scenarios, decision makers seek to efficiently optimize multiple competing objectives in a sample-efficient fashion. Multi-objective Bayesian optimization (BO) is a common approach, but many of the best-performing acquisition functions do not have known analytic gradients and suffer from high computational overhead. We leverage recent advances in programming models and hardware acceleration for multi-objective BO using Expected Hypervolume Improvement (EHVI)---an algorithm notorious for its high computational complexity. We derive a novel formulation of q-Expected Hypervolume Improvement (qEHVI), an acquisition function that extends EHVI to the parallel, constrained evaluation setting. qEHVI is an exact computation of the joint EHVI of q new candidate points (up to Monte-Carlo (MC) integration error). Whereas previous EHVI formulations rely on gradient-free acquisition optimization or approximated gradients, we compute exact gradients of the MC estimator via auto-differentiation, thereby enabling efficient and effective optimization using first-order and quasi-second-order methods. Our empirical evaluation demonstrates that qEHVI is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time.
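The abstract describes qEHVI as a Monte-Carlo estimate of the joint hypervolume improvement of q candidate points over the current Pareto front. A minimal pure-Python sketch of that estimator idea is below, restricted to two objectives for simplicity; the function names, the two-objective sweep, and the toy posterior samples are illustrative assumptions, not the paper's implementation (which lives in BoTorch and differentiates the estimator with PyTorch autograd):

```python
def hv_2d(points, ref):
    """Hypervolume dominated by `points` w.r.t. reference point `ref`
    (maximization, 2 objectives), via a sort-and-sweep over rectangles."""
    # Keep only points that strictly dominate the reference point.
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    if not pts:
        return 0.0
    # Sweep in decreasing first objective; each point adds the strip of
    # area it contributes above the best second objective seen so far.
    pts.sort(key=lambda p: p[0], reverse=True)
    hv, best_y = 0.0, ref[1]
    for x, y in pts:
        if y > best_y:
            hv += (x - ref[0]) * (y - best_y)
            best_y = y
    return hv

def mc_qehvi(pareto_front, ref, posterior_samples):
    """Monte-Carlo estimate of the joint EHVI of q candidates.

    `posterior_samples` is a list of MC draws; each draw is a list of q
    outcome vectors (one per candidate), e.g. sampled from a GP posterior.
    Averages the hypervolume improvement of each joint draw.
    """
    base = hv_2d(pareto_front, ref)
    improvements = [
        hv_2d(pareto_front + list(sample), ref) - base
        for sample in posterior_samples
    ]
    return sum(improvements) / len(improvements)
```

In the paper's setting the same MC estimator is built from reparameterized GP posterior samples inside an autograd framework, so its exact gradients with respect to the candidate locations are available for first-order and quasi-second-order optimization.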




Has companion code repository: https://github.com/pytorch/botorch






