A theoretical and empirical comparison of gradient approximations in derivative-free optimization
Publication: 2143221
DOI: 10.1007/s10208-021-09513-z
zbMATH: 1493.90233
arXiv: 1905.01332
OpenAlex: W3163478863
Wikidata: Q114228267
Scholia: Q114228267
MaRDI QID: Q2143221
Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, Katya Scheinberg
Publication date: 31 May 2022
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1905.01332
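Among the gradient approximations the paper compares are standard forward finite differences and Gaussian smoothing (alongside central differences, smoothing on a sphere, and linear interpolation). The Python sketch below is a minimal illustration of the first two estimators only, not the authors' implementation; the test function, step size `h`, smoothing radius `sigma`, and sample count are arbitrary placeholder choices.

```python
import numpy as np

def forward_fd_gradient(f, x, h=1e-6):
    """Forward finite-difference estimate: g_i = (f(x + h*e_i) - f(x)) / h."""
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + h * e) - fx) / h
    return g

def gaussian_smoothed_gradient(f, x, sigma=1e-4, num_samples=1000, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian smoothing of f:
    average of (f(x + sigma*u_k) - f(x)) / sigma * u_k over u_k ~ N(0, I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + sigma * u) - fx) / sigma * u
    return g / num_samples

# Example: both estimators recover the gradient of a smooth quadratic.
f = lambda z: 0.5 * (z @ z)               # true gradient at x is x itself
x = np.array([1.0, -2.0, 0.5])
print(forward_fd_gradient(f, x))          # close to [1.0, -2.0, 0.5]
print(gaussian_smoothed_gradient(f, x))   # noisier; improves with more samples
```

The contrast this sketch exhibits is the one the paper studies: the deterministic estimator needs n+1 function evaluations per gradient, while the randomized one trades per-sample accuracy for a tunable evaluation budget.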
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Nonlinear programming (90C30)
- Derivative-free methods and methods using generalized derivatives (90C56)
Related Items (17)
- Full-low evaluation methods for derivative-free optimization
- Adaptive Gradient-Free Method for Stochastic Optimization
- Finite Difference Gradient Approximation: To Randomize or Not?
- Zeroth-order methods for noisy Hölder-gradient functions
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- Adaptive Finite-Difference Interval Estimation for Noisy Derivative-Free Optimization
- A mixed finite differences scheme for gradient approximation
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Zeroth-order optimization with orthogonal random directions
- A trust region method for noisy unconstrained optimization
- Zeroth-order single-loop algorithms for nonconvex-linear minimax problems
- Accelerated gradient methods with absolute and relative noise in the gradient
- Quadratic regularization methods with finite-difference gradient approximations
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Unifying framework for accelerated randomized methods in convex optimization
- Recent Theoretical Advances in Non-Convex Optimization
- A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization
Cites Work
- Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization
- Sample size selection in optimization methods for machine learning
- On lower bounds for tail probabilities
- More test examples for nonlinear programming codes
- An accelerated directional derivative method for smooth stochastic convex optimization
- Random gradient-free minimization of convex functions
- Geometry of interpolation sets in derivative free optimization
- Stochastic simulation: Algorithms and analysis
- Adaptive stochastic approximation by the simultaneous perturbation method
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Estimating Computational Noise
- Geometry of sample sets in derivative-free optimization: polynomial regression and underdetermined interpolation
- ORBIT: Optimization by Radial Basis Function Interpolation in Trust-Regions
- Introduction to Stochastic Search and Optimization
- ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
- On Sampling Rates in Simulation-Based Recursions
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling
- Benchmarking Derivative-Free Optimization Algorithms
- On the Global Convergence of Trust Region Algorithms Using Inexact Gradient Information
- A Stochastic Line Search Method with Expected Complexity Analysis
- Derivative-free optimization methods
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- An Introduction to Matrix Concentration Inequalities
- Stochastic Estimation of the Maximum of a Regression Function
- Benchmarking optimization software with performance profiles