Scalable Bayesian optimization with randomized prior networks
From MaRDI portal
Publication:6153885
Abstract: Several fundamental problems in science and engineering consist of global optimization tasks involving unknown high-dimensional (black-box) functions that map a set of controllable variables to the outcomes of an expensive experiment. Bayesian Optimization (BO) techniques are known to be effective in tackling global optimization problems using a relatively small number of objective function evaluations, but their performance suffers when dealing with high-dimensional outputs. To overcome the major challenge of dimensionality, here we propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors. Using appropriate architecture choices, we show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces. In the context of BO, we augment the proposed probabilistic surrogates with re-parameterized Monte Carlo approximations of multiple-point (parallel) acquisition functions, as well as methodological extensions for accommodating black-box constraints and multi-fidelity information sources. We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs, including a constrained multi-fidelity optimization task involving shape optimization of rotor blades in turbomachinery.
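The surrogate described in the abstract combines bootstrapped ensembles with randomized priors: each ensemble member adds a fixed, untrained random "prior" network to a trainable network, and disagreement across members provides epistemic uncertainty for the acquisition step. The sketch below illustrates this idea only in miniature; it substitutes random-feature ridge heads for the paper's deep architectures, and all names (`RandomPriorEnsemble`, `beta`, `lam`, etc.) are hypothetical, not from the paper's code.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class RandomPriorMember:
    """One ensemble member: a trainable ridge head on fixed random ReLU
    features, plus a fixed random 'prior' network that is never trained.
    Prediction: trainable(x) + beta * prior(x).  (Illustrative stand-in
    for the paper's deep architectures.)"""

    def __init__(self, dim, width=64, beta=1.0, lam=1e-3, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        self.beta, self.lam = beta, lam
        # fixed random features for the trainable head
        self.W = rng.normal(size=(dim, width))
        self.b = rng.normal(size=width)
        # fixed random prior network (weights drawn once, then frozen)
        self.Wp = rng.normal(size=(dim, width))
        self.bp = rng.normal(size=width)
        self.vp = rng.normal(size=width) / np.sqrt(width)
        self.v = np.zeros(width)  # the only trainable parameters

    def _feat(self, X):
        return relu(X @ self.W + self.b)

    def prior(self, X):
        return relu(X @ self.Wp + self.bp) @ self.vp

    def fit(self, X, y):
        # fit the trainable head to residual targets y - beta * prior(x),
        # so that the full prediction matches y on the training data
        t = y - self.beta * self.prior(X)
        Phi = self._feat(X)
        A = Phi.T @ Phi + self.lam * np.eye(Phi.shape[1])
        self.v = np.linalg.solve(A, Phi.T @ t)

    def predict(self, X):
        return self._feat(X) @ self.v + self.beta * self.prior(X)

class RandomPriorEnsemble:
    """Bootstrapped ensemble: each member trains on a resample of the
    data; the spread across members serves as epistemic uncertainty."""

    def __init__(self, dim, n_members=8, seed=0, **kw):
        self.rng = np.random.default_rng(seed)
        self.members = [RandomPriorMember(dim, rng=self.rng, **kw)
                        for _ in range(n_members)]

    def fit(self, X, y):
        n = len(X)
        for m in self.members:
            idx = self.rng.integers(0, n, size=n)  # bootstrap resample
            m.fit(X[idx], y[idx])

    def predict(self, X):
        preds = np.stack([m.predict(X) for m in self.members])
        return preds.mean(axis=0), preds.std(axis=0)
```

Because each member keeps its own frozen prior, the ensemble disagrees more in regions far from the training data, which is the property a BO acquisition function exploits when trading off exploration against exploitation.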
Recommendations
- High-dimensional Bayesian optimization using low-dimensional feature spaces
- Taking another step: a simple approach to high-dimensional Bayesian optimization
- Deep Gaussian process for multi-objective Bayesian optimization
- Bayesian optimization in a billion dimensions via random embeddings
- Scalable Bayesian optimization with generalized product of experts
Cites work
- scientific article; zbMATH DE number 6433488
- scientific article; zbMATH DE number 6276166
- A Knowledge-Gradient Policy for Sequential Information Collection
- A Review of Modern Computational Algorithms for Bayesian Optimal Design
- Bayesian differential programming for robust systems identification under uncertainty
- Bayesian optimization with output-weighted optimal sampling
- Calibration of a radiation quality model for sparse and uncertain data
- Introduction to Shape Optimization
- On the quantification of aleatory and epistemic uncertainty using sliced-normal distributions
- Optimal control
- Random forests
- Random variables with moment-matching staircase density functions
Cited in (6)
- High-dimensional Bayesian optimization using low-dimensional feature spaces
- Scalable Bayesian optimization with generalized product of experts
- A robust multi-objective Bayesian optimization framework considering input uncertainty
- Taking another step: a simple approach to high-dimensional Bayesian optimization
- Multi-fidelity cost-aware Bayesian optimization
- Deep Gaussian process for multi-objective Bayesian optimization