Adaptive Gradient-Free Method for Stochastic Optimization
Publication:5054162
DOI: 10.1007/978-3-030-92711-0_7
OpenAlex: W4206782482
MaRDI QID: Q5054162
Pavel Dvurechensky, Kamil Safin, Alexander V. Gasnikov
Publication date: 29 November 2022
Published in: Communications in Computer and Information Science
Full work available at URL: https://doi.org/10.1007/978-3-030-92711-0_7
Cites Work
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
- A heuristic adaptive fast gradient method in stochastic optimization problems
- Choice of finite-difference schemes in solving coefficient inverse problems
- Stochastic global optimization
- Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems
- Introduction to Derivative-Free Optimization
- Introduction to Stochastic Search and Optimization
- Acceleration of Global Search by Implementing Dual Estimates for Lipschitz Constant
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization