Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
From MaRDI portal
Publication: 6175706
DOI: 10.1007/s12532-023-00233-9
zbMath: 1517.90164
arXiv: 2109.12213
OpenAlex: W3203868469
MaRDI QID: Q6175706
Stefan M. Wild, Raghu Bollapragada
Publication date: 24 July 2023
Published in: Mathematical Programming Computation
Full work available at URL: https://arxiv.org/abs/2109.12213
Numerical mathematical programming methods (65K05)
Nonlinear programming (90C30)
Derivative-free methods and methods using generalized derivatives (90C56)
Stochastic programming (90C15)
Methods of quasi-Newton type (90C53)
Related Items (3)
- Adaptive Gradient-Free Method for Stochastic Optimization
- A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
Cites Work
- Stochastic derivative-free optimization using a trust region framework
- Handbook of simulation optimization
- Sample size selection in optimization methods for machine learning
- Variable-number sample-path optimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Stochastic optimization using a trust-region method and random models
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Sub-sampled Newton methods
- Stochastic Nelder-Mead simplex method -- a new globally convergent direct search method for simulation optimization
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- A zeroth order method for stochastic weakly convex optimization
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- Optimization with hidden constraints and embedded Monte Carlo computations
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Estimating Derivatives of Noisy Simulations
- Estimating Computational Noise
- A Smoothing Direct Search Method for Monte Carlo-Based Bound Constrained Composite Nonsmooth Optimization
- Budget-Dependent Convergence Rate of Stochastic Approximation
- ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
- Adaptive Sampling Strategies for Stochastic Optimization
- Derivative-Free and Blackbox Optimization
- On Sampling Rates in Simulation-Based Recursions
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Optimization Methods for Large-Scale Machine Learning
- Simulation-Based Optimization with Stochastic Approximation Using Common Random Numbers
- A robust multi-batch L-BFGS method for machine learning
- Benchmarking Derivative-Free Optimization Algorithms
- Analysis of the BFGS Method with Errors
- Derivative-free optimization methods
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- CUTEr and SifDec
- Stochastic Estimation of the Maximum of a Regression Function
- A Stochastic Approximation Method
- Multidimensional Stochastic Approximation Methods
- Exact and inexact subsampled Newton methods for optimization
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization