Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing
Publication: Q6561380
DOI: 10.1137/22M1543446
zbMATH Open: 1548.90343
MaRDI QID: Q6561380
FDO: Q6561380
Authors: F. Rinaldi, L. N. Vicente, Damiano Zeffiro
Publication date: 25 June 2024
Published in: SIAM Journal on Optimization
Recommendations
- Stochastic derivative-free optimization using a trust region framework
- Expected complexity analysis of stochastic direct-search
- Convergence of trust-region methods based on probabilistic models
- Stochastic optimization using a trust-region method and random models
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
Mathematics Subject Classification: Nonlinear programming (90C30); Stochastic programming (90C15); Derivative-free methods and methods using generalized derivatives (90C56)
Cites Work
- ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization
- A Linesearch-Based Derivative-Free Approach for Nonsmooth Constrained Optimization
- Trust-region methods for the derivative-free optimization of nonsmooth black-box functions
- A Derivative-Free Algorithm for Linearly Constrained Finite Minimax Problems
- On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals
- Inequalities for the $r$th Absolute Moment of a Sum of Random Variables, $1 \leqq r \leqq 2$
- Random gradient-free minimization of convex functions
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Introduction to Derivative-Free Optimization
- Worst case complexity of direct search
- Benchmarking Derivative-Free Optimization Algorithms
- Stochastic optimization using a trust-region method and random models
- Convergence of trust-region methods based on probabilistic models
- Stochastic derivative-free optimization using a trust region framework
- Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization
- On the Global Convergence of Derivative-Free Methods for Unconstrained Optimization
- An introduction to measure theory
- A stochastic line search method with expected complexity analysis
- Probability
- Derivative-free and blackbox optimization
- The Exact Constant in the Rosenthal Inequality for Random Variables with Mean Zero
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Expected complexity analysis of stochastic direct-search
- First-order and stochastic optimization methods for machine learning
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates
- Stochastic Zeroth-Order Riemannian Derivative Estimation and Optimization