Scalable Gaussian process-based transfer surrogates for hyperparameter optimization
From MaRDI portal
Publication:1707465
DOI: 10.1007/S10994-017-5684-Y
zbMath: 1457.68242
OpenAlex: W2780562800
Wikidata: Q59866136
Scholia: Q59866136
MaRDI QID: Q1707465
Lars Schmidt-Thieme, Martin Wistuba, Nicolas Schilling
Publication date: 3 April 2018
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-017-5684-y
Mathematics Subject Classification:
Nonparametric regression and quantile regression (62G08)
Learning and adaptive systems in artificial intelligence (68T05)
Related Items (5)
- Large scale multi-output multi-class classification using Gaussian processes
- AutonoML: Towards an Integrated Framework for Autonomous Machine Learning
- Scalable Gaussian process-based transfer surrogates for hyperparameter optimization
- Dataset2Vec: learning dataset meta-features
- Benchmark and Survey of Automated Machine Learning Frameworks
Uses Software
Cites Work
- Pairwise meta-rules for better meta-learning-based algorithm ranking
- Statistical comparison of classifiers through Bayesian hierarchical modelling
- Efficient global optimization of expensive black-box functions
- Efficient benchmarking of algorithm configurators via model-based surrogates
- Scalable Gaussian process-based transfer surrogates for hyperparameter optimization
- Speeding up algorithm selection using average ranking and active testing by introducing runtime
- Metalearning
- ODEXPERT: an expert system to select numerical solvers for initial value ODE systems
- A new measure of rank correlation
- A Comparison of Alternative Tests of Significance for the Problem of $m$ Rankings
- Model selection for small sample regression