HEBO: Pushing The Limits of Sample-Efficient Hyperparameter Optimisation


DOI: 10.1613/JAIR.1.13643
arXiv: 2012.03826
Wikidata: Q114594306 (Scholia: Q114594306)
MaRDI QID: Q6355481
FDO: Q6355481

Authors: Alexander I. Cowen-Rivers, Wenlong Lyu, Rasul Tutunov, Zhi Wang, Antoine Grosnit, Ryan-Rhys Griffiths, Alexandre Max Maraval, Hao Jianye, Jun Wang, Jan Peters, Haitham Bou-Ammar

Publication date: 7 December 2020

Abstract: In this work we rigorously analyse assumptions inherent to black-box hyper-parameter tuning tasks. Our results on the Bayesmark benchmark indicate that heteroscedasticity and non-stationarity pose significant challenges for black-box optimisers. Based on these findings, we propose a Heteroscedastic and Evolutionary Bayesian Optimisation solver (HEBO). HEBO performs non-linear input and output warping, admits exact marginal log-likelihood optimisation, and is robust to the values of learned parameters. We demonstrate HEBO's empirical efficacy on the NeurIPS 2020 Black-Box Optimisation challenge, where HEBO placed first. Upon further analysis, we observe that HEBO significantly outperforms existing black-box optimisers on the 108 machine learning hyper-parameter tuning tasks that comprise the Bayesmark benchmark. Our findings indicate that the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, that multi-objective acquisition ensembles with Pareto-front solutions improve the queried configurations, and that robust acquisition maximisers afford empirical advantages over their non-robust counterparts. We hope these findings serve as guiding principles for practitioners of Bayesian optimisation. All code is available at https://github.com/huawei-noah/HEBO.
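The two warping components named in the abstract can be illustrated in isolation. The sketch below is not HEBO's actual implementation; it assumes a Kumaraswamy CDF for input warping (a common choice in the warped-GP literature) and uses scikit-learn's PowerTransformer as a stand-in for output warping, on a toy task exhibiting the two pathologies the abstract identifies.

import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)

def kumaraswamy_warp(x, a=2.0, b=0.5):
    """Kumaraswamy CDF: a monotone non-linear warp of [0, 1] onto [0, 1]."""
    return 1.0 - (1.0 - x ** a) ** b

# Toy objective with the two pathologies the abstract names:
# non-stationarity (the objective varies faster near x = 1) and
# heteroscedasticity (the noise scale grows with x).
x = rng.uniform(size=(200, 1))
noise = rng.normal(scale=0.5 * x.ravel()).reshape(-1, 1)
y = np.exp(3.0 * x) + noise

x_warped = kumaraswamy_warp(x)                   # input warping
y_warped = PowerTransformer().fit_transform(y)   # output warping (Yeo-Johnson)
# A standard stationary, homoscedastic GP surrogate would then be fit on
# (x_warped, y_warped) rather than on the raw (x, y).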

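For context, the linked repository exposes a suggest/observe optimisation loop. The following minimal usage sketch follows the repository's README; the exact module paths, class names, and argument names are assumptions that may differ between package versions.

import numpy as np
import pandas as pd
from hebo.design_space.design_space import DesignSpace
from hebo.optimizers.hebo import HEBO

def objective(params: pd.DataFrame) -> np.ndarray:
    # Toy black-box objective: minimise (x - 0.37)^2.
    return ((params['x'].values - 0.37) ** 2).reshape(-1, 1)

space = DesignSpace().parse([{'name': 'x', 'type': 'num', 'lb': -3.0, 'ub': 3.0}])
opt = HEBO(space)
for it in range(10):
    rec = opt.suggest(n_suggestions=4)   # batch of candidate configurations
    opt.observe(rec, objective(rec))     # feed back observed objective values
    print(f'iter {it}: best objective so far = {opt.y.min():.4f}')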