Optimal learning for nonlinear parametric belief models over multidimensional continuous spaces
From MaRDI portal
Publication:4554064
Recommendations
- Optimal learning with local nonlinear parametric models over continuous designs
- Optimal learning with a local parametric belief model
- Optimal Learning for Stochastic Optimization with Nonlinear Parametric Belief Models
- Convergence rates of efficient global optimization algorithms
- Optimal learning for sequential sampling with non-parametric beliefs
Cites work
- Scientific article, zbMATH DE number 3151196 (title unavailable)
- Scientific article, zbMATH DE number 1846041 (title unavailable)
- A Knowledge-Gradient Policy for Sequential Information Collection
- A comparison of evolution strategies with other direct search methods in the presence of noise
- A direct search algorithm for optimization with noisy function evaluations
- A method of trust region type for minimizing noisy functions
- A survey on metaheuristics for stochastic combinatorial optimization
- Adaptation and tracking in system identification - a survey
- Approximate dynamic programming. Solving the curses of dimensionality
- Bayesian look ahead one-stage sampling allocations for selection of the best population
- Constrained global optimization of expensive black box functions using radial basis functions
- Convergence results for single-step on-policy reinforcement-learning algorithms
- Dynamic sampling allocation and design selection
- Finite-time analysis of the multiarmed bandit problem
- Generalized Poisson Models and their Applications in Insurance and Finance
- Geometry of interpolation sets in derivative free optimization
- Handbooks in operations research and management science: Simulation
- Introduction to Stochastic Search and Optimization
- Iterative learning control for deterministic systems
- Learning to optimize via posterior sampling
- ORBIT: Optimization by Radial Basis Function Interpolation in Trust-Regions
- On stable learning in dynamic oligopolies
- On the solution of stochastic optimization and variational problems in imperfect information regimes
- Optimal Learning for Stochastic Optimization with Nonlinear Parametric Belief Models
- Optimal learning in experimental design using the knowledge gradient policy with application to characterizing nanoemulsion stability
- SO-MI: a surrogate model algorithm for computationally expensive nonlinear mixed-integer black-box global optimization problems
- Sequential sampling to myopically maximize the expected value of information
- Simulated annealing in the presence of noise
- Simulation allocation for determining the best design in the presence of correlated sampling
- Simulation budget allocation for further enhancing the efficiency of ordinal optimization
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic approximation methods for constrained and unconstrained systems
- The correlated knowledge gradient for simulation optimization of continuous parameters using Gaussian process regression
- The knowledge-gradient algorithm for sequencing experiments in drug discovery
- The knowledge-gradient policy for correlated normal beliefs
- UOBYQA: unconstrained optimization by quadratic approximation
- Why are Normal Distributions Normal?
Cited in (6 documents)
- Optimal learning with local nonlinear parametric models over continuous designs
- Optimal Learning for Stochastic Optimization with Nonlinear Parametric Belief Models
- Prior Knowledge in Learning Finite Parameter Spaces
- Optimal learning for sequential sampling with non-parametric beliefs
- Optimal learning with a local parametric belief model
- Optimal online learning for nonlinear belief models using discrete priors