Finite Difference Gradient Approximation: To Randomize or Not?
Publication: 5057983
DOI: 10.1287/ijoc.2022.1218
OpenAlex: W4289767001
Wikidata: Q114058177 (Scholia: Q114058177)
MaRDI QID: Q5057983
Publication date: 1 December 2022
Published in: INFORMS Journal on Computing
Full work available at URL: https://doi.org/10.1287/ijoc.2022.1218
Uses Software
Cites Work
- Spearmint
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Random gradient-free minimization of convex functions
- Adaptive stochastic approximation by the simultaneous perturbation method
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Estimating Derivatives of Noisy Simulations
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling
- Derivative-free optimization methods
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
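The title and several cited works (e.g., Nesterov's random gradient-free minimization and the simultaneous perturbation literature) contrast deterministic coordinate-wise finite differences with randomized directional estimators. Below is a minimal illustrative sketch of the two classical estimators, assuming a standard forward-difference scheme and a Gaussian-smoothing style randomized estimator; it is not taken from the paper itself, and the function names and parameters are hypothetical.

```python
import numpy as np

def forward_difference_gradient(f, x, h=1e-6):
    """Deterministic estimate: one forward difference per coordinate (n + 1 evaluations)."""
    n = x.size
    fx = f(x)
    grad = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        grad[i] = (f(x + h * e) - fx) / h
    return grad

def randomized_gradient_estimate(f, x, h=1e-6, num_samples=20, rng=None):
    """Randomized estimate: average of forward differences along random Gaussian directions."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    fx = f(x)
    grad = np.zeros(n)
    for _ in range(num_samples):
        u = rng.standard_normal(n)          # random direction u ~ N(0, I)
        grad += (f(x + h * u) - fx) / h * u  # directional difference scaled by the direction
    return grad / num_samples

if __name__ == "__main__":
    # Smooth quadratic test function; the exact gradient at x is 2 * x.
    f = lambda x: float(np.sum(x ** 2))
    x0 = np.array([1.0, -2.0, 0.5])
    print(forward_difference_gradient(f, x0))
    print(randomized_gradient_estimate(f, x0, num_samples=200))
```

The deterministic estimator spends n + 1 evaluations for a full gradient, while the randomized estimator trades a controllable number of evaluations for an unbiased (up to smoothing bias of order h) estimate of the smoothed gradient, which is the trade-off the title's question refers to.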