Small errors in random zeroth-order optimization are imaginary
From MaRDI portal
Publication:6580001
Cites work
- scientific article; zbMATH DE number 1817650 (no title available)
- scientific article; zbMATH DE number 5285446 (no title available)
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 1022658 (no title available)
- scientific article; zbMATH DE number 1972910 (no title available)
- scientific article; zbMATH DE number 852360 (no title available)
- scientific article; zbMATH DE number 5060482 (no title available)
- scientific article; zbMATH DE number 7753344 (no title available)
- A Simplex Method for Function Minimization
- A new one-point residual-feedback oracle for black-box learning and control
- A one-bit, comparison-based gradient estimator
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Beautiful differentiation
- Complex-step derivative approximation in noisy environment
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Derivative-free and blackbox optimization
- Derivative-free methods for policy optimization: guarantees for linear quadratic systems
- Derivative-free optimization methods
- Do you trust derivatives or differences?
- Evaluating Derivatives
- Finite Difference Gradient Approximation: To Randomize or Not?
- First-order methods of smooth convex optimization with inexact oracle
- Five stages of accepting constructive mathematics
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Improved exploitation of higher order smoothness in derivative-free optimization
- Introduction to Derivative-Free Optimization
- Introduction to Smooth Manifolds
- Julia: a fresh approach to numerical computing
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Numerical Differentiation of Analytic Functions
- Numerical computing with IEEE floating point arithmetic. Incl. one theorem, one rule of thumb, and one hundred and one exercises
- On the accuracy of the complex-step-finite-difference method
- On the numerical performance of finite-difference-based methods for derivative-free optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Optimal order of accuracy of search algorithms in stochastic optimization
- Optimization of convex functions with random pursuit
- Practical mathematical optimization. Basic optimization theory and gradient-based algorithms
- Random gradient-free minimization of convex functions
- Smooth Optimization with Approximate Gradient
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic Zeroth-Order Riemannian Derivative Estimation and Optimization
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- The complex step approximation to the Fréchet derivative of a matrix function
- The complex-step derivative approximation
- Using Complex Variables to Estimate Derivatives of Real Functions
- Using multicomplex variables for automatic computation of high-order derivatives
- When is a Function that Satisfies the Cauchy-Riemann Equations Analytic?
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
Cited in: 2 documents
This page was built for publication: Small errors in random zeroth-order optimization are imaginary
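The publication's title alludes to the complex-step derivative approximation, which several of the cited works develop (e.g. "The complex-step derivative approximation" and "Using Complex Variables to Estimate Derivatives of Real Functions"): for a real-analytic function, evaluating at a small imaginary perturbation yields a derivative estimate with no subtractive cancellation, so the step size can be made extremely small. A minimal sketch (the test function and step size are illustrative choices, not taken from the publication):

```python
import cmath
import math

def complex_step_grad(f, x, h=1e-20):
    """Estimate f'(x) via the complex-step formula Im f(x + ih) / h.

    Unlike the forward difference (f(x+h) - f(x)) / h, no subtraction of
    nearly equal quantities occurs, so h can be tiny (even 1e-20) without
    floating-point round-off swamping the estimate.
    """
    return f(complex(x, h)).imag / h

# Illustrative example: f(x) = exp(x) * sin(x), with exact derivative
# f'(x) = exp(x) * (sin(x) + cos(x)).
f = lambda z: cmath.exp(z) * cmath.sin(z)
approx = complex_step_grad(f, 1.0)
exact = math.e * (math.sin(1.0) + math.cos(1.0))
print(abs(approx - exact))  # error near machine precision
```

The key requirement is that `f` be implemented with operations that extend analytically to complex arguments (hence `cmath` rather than `math` inside `f`).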