Small errors in random zeroth-order optimization are imaginary
DOI: 10.1137/22M1510261
zbMATH Open: 1544.65101
MaRDI QID: Q6580001
Authors: Man-Chung Yue, Daniel Kuhn
Publication date: 29 July 2024
Published in: SIAM Journal on Optimization
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Derivative-free methods and methods using generalized derivatives (90C56)
Cites Work
- Julia: a fresh approach to numerical computing
- Title not available
- A Simplex Method for Function Minimization
- Title not available
- Title not available
- Title not available
- Introduction to Smooth Manifolds
- Smooth Optimization with Approximate Gradient
- Evaluating Derivatives
- First-order methods of smooth convex optimization with inexact oracle
- Using Complex Variables to Estimate Derivatives of Real Functions
- Numerical Differentiation of Analytic Functions
- The complex step approximation to the Fréchet derivative of a matrix function
- Random gradient-free minimization of convex functions
- Optimization of convex functions with random pursuit
- Title not available
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Introduction to Derivative-Free Optimization
- The complex-step derivative approximation
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Stochastic Estimation of the Maximum of a Regression Function
- Optimal order of accuracy of search algorithms in stochastic optimization
- Numerical computing with IEEE floating point arithmetic. Incl. one theorem, one rule of thumb, and one hundred and one exercises
- Title not available
- On the accuracy of the complex-step-finite-difference method
- Title not available
- Complex-step derivative approximation in noisy environment
- Do you trust derivatives or differences?
- Using multicomplex variables for automatic computation of high-order derivatives
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Derivative-free and blackbox optimization
- Beautiful differentiation
- Derivative-free optimization methods
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Five stages of accepting constructive mathematics
- Improved exploitation of higher order smoothness in derivative-free optimization
- Practical mathematical optimization. Basic optimization theory and gradient-based algorithms
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- A new one-point residual-feedback oracle for black-box learning and control
- Derivative-free methods for policy optimization: guarantees for linear quadratic systems
- A one-bit, comparison-based gradient estimator
- When is a Function that Satisfies the Cauchy-Riemann Equations Analytic?
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
- On the numerical performance of finite-difference-based methods for derivative-free optimization
- Finite Difference Gradient Approximation: To Randomize or Not?
- Title not available
- Stochastic Zeroth-Order Riemannian Derivative Estimation and Optimization
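Several of the cited works (Squire and Trapp's "Using Complex Variables to Estimate Derivatives of Real Functions", "The complex-step derivative approximation") concern the complex-step method that the publication's title alludes to. As a minimal illustrative sketch of that classical technique, not of this paper's own estimator: evaluating a real-analytic function at a complex point x + ih and taking the imaginary part yields a derivative approximation with no subtractive cancellation, so the step h can be taken extremely small.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Classical complex-step approximation: f'(x) ~ Im(f(x + i*h)) / h.

    Because no difference of nearby function values is formed, there is
    no cancellation error, and h can be far below machine epsilon.
    """
    return np.imag(f(complex(x, h))) / h

# Example: derivative of exp(sin(x)) at x = 1.0
f = lambda x: np.exp(np.sin(x))
approx = complex_step_derivative(f, 1.0)
exact = np.cos(1.0) * np.exp(np.sin(1.0))  # analytic derivative for comparison
```

With h = 1e-20 the truncation error is O(h^2), so the approximation here agrees with the analytic derivative to machine precision, whereas a forward finite difference with so small a step would return 0 due to rounding.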
Cited In (2)