Stochastic derivative-free optimization using a trust region framework
From MaRDI portal
Recommendations
- Stochastic optimization using a trust-region method and random models
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- A stochastic trust region method for unconstrained optimization problems
- Derivative-free optimization via proximal point methods
- A derivative-free algorithm based on simple model for unconstrained optimization
Cites work
- scientific article; zbMATH DE number 3067118 (no title available)
- A Simplex Method for Function Minimization
- An adaptive Monte Carlo algorithm for computing mixed logit estimators
- Benchmarking Derivative-Free Optimization Algorithms
- Benchmarking optimization software with performance profiles
- Convergence of trust-region methods based on probabilistic models
- Derivative-free optimization of expensive functions with computational error using weighted regression
- Estimating Computational Noise
- Estimating derivatives of noisy simulations
- Introduction to Derivative-Free Optimization
- Introduction to Stochastic Search and Optimization
- Lipschitzian optimization without the Lipschitz constant
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- On sampling controlled stochastic approximation
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic optimization using a trust-region method and random models
- The effect of deterministic noise in subgradient methods
- UOBYQA: unconstrained optimization by quadratic approximation
Cited in (33)
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
- Newton-type methods for non-convex optimization under inexact Hessian information
- A stochastic Levenberg-Marquardt method using random models with complexity results
- Convergence of Newton-MR under inexact Hessian information
- Manifold sampling for \(\ell_1\) nonconvex optimization
- Expected decrease for derivative-free algorithms using random subspaces
- A stochastic trust region method for unconstrained optimization problems
- Survey of derivative-free optimization
- Surrogate-based promising area search for Lipschitz continuous simulation optimization
- Stochastic optimization using a trust-region method and random models
- Derivative-free optimization methods
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates
- Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling
- A zeroth order method for stochastic weakly convex optimization
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Truncated Cauchy random perturbations for smoothed functional-based stochastic optimization
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Adaptive state-dependent diffusion for derivative-free optimization
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
- Worst case complexity bounds for linesearch-type derivative-free algorithms
- A derivative-free optimization algorithm for the efficient minimization of functions obtained via statistical averaging
- An empirical quantile estimation approach for chance-constrained nonlinear optimization problems
- Coupled learning enabled stochastic programming with endogenous uncertainty
- Derivative-free trust region optimization for robust well control under geological uncertainty
- Derivative-free optimization of expensive functions with computational error using weighted regression
- Optimization based on non-commutative maps
- ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization
- Expected complexity analysis of stochastic direct-search
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing