Random gradient-free minimization of convex functions
DOI: 10.1007/s10208-015-9296-2 · zbMATH Open: 1380.90220 · OpenAlex: W2149479912 · MaRDI QID: Q2397749 · FDO: Q2397749
Publication date: 23 May 2017
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-015-9296-2
Recommendations
- Random algorithms for convex minimization problems
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Random convex programs
- Optimization of convex functions with random pursuit
- Random coordinate descent methods for \(\ell_0\) regularized convex optimization
- An optimal randomized incremental gradient method
- A simple randomised algorithm for convex optimisation
- Convergence of a random algorithm for function optimization
- On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization
- A variational approach to stochastic minimization of convex functionals
Keywords: optimization; stochastic optimization; convex functions; derivative-free methods; complexity bounds; random methods
MSC classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Analysis of algorithms and problem complexity (68Q25)
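For orientation, below is a minimal sketch of the kind of random gradient-free (Gaussian smoothing) oracle studied in the paper: the gradient is estimated from two function values along a random Gaussian direction and plugged into a plain descent step. The smoothing parameter mu, the step size h, the iteration count, and the quadratic test function are illustrative assumptions, not the constants or problem class analyzed in the paper.

```python
import numpy as np

def gaussian_smoothing_step(f, x, mu=1e-4, h=1e-2, rng=None):
    # Two-point random gradient-free oracle: sample u ~ N(0, I) and form
    # g = (f(x + mu*u) - f(x)) / mu * u, an estimate of the gradient of the
    # Gaussian smoothing of f.  mu and h are illustrative placeholders,
    # not the tuned constants from the paper.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    g = (f(x + mu * u) - f(x)) / mu * u
    return x - h * g

# Usage on a simple convex quadratic (assumed test function):
f = lambda x: 0.5 * float(x @ x)
x = np.ones(10)
for _ in range(5000):
    x = gaussian_smoothing_step(f, x)
print(np.linalg.norm(x))  # shrinks toward 0 as the iterates converge
```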
Cites Work
- Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions
- A Simplex Method for Function Minimization
- Introductory lectures on convex optimization. A basic course.
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Title not available
- Optimization and nonsmooth analysis
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Online convex optimization in the bandit setting: gradient descent without a gradient
- Title not available
- Solving convex programs by random walks
- Introduction to Derivative-Free Optimization
- Lexicographic differentiation of nonsmooth functions
- Convergence of the restricted Nelder-Mead algorithm in two dimensions
- Expected number of steps of a random optimization method
- Random optimization
- On the convergence of the Baba and Dorea random optimization methods
- Algorithms for approximate calculation of the minimum of a convex function from its values
- Stochastic Convex Optimization with Bandit Feedback
Cited In (only showing first 100 items)
- Gradient and diagonal Hessian approximations using quadratic interpolation models and aligned regular bases
- Asynchronous gossip-based gradient-free method for multiagent optimization
- A trust region method for noisy unconstrained optimization
- Noisy zeroth-order optimization for non-smooth saddle point problems
- Minimization Algorithms for Functions with Random Noise
- Variable metric random pursuit
- Accelerated directional search with non-Euclidean prox-structure
- Randomized Iterative Methods for Linear Systems
- Linesearch Newton-CG methods for convex optimization with noise
- One-point gradient-free methods for smooth and non-smooth saddle-point problems
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- A mixed finite differences scheme for gradient approximation
- First-order methods for convex optimization
- Direct Search Based on Probabilistic Descent
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Zeroth-order methods for noisy Hölder-gradient functions
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- Worst case complexity of direct search under convexity
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- Zeroth-order optimization with orthogonal random directions
- A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization
- Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
- Inverse reinforcement learning in contextual MDPs
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- An accelerated directional derivative method for smooth stochastic convex optimization
- Gradient-free method for nonsmooth distributed optimization
- Efficient Convex Optimization with Oracles
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Accelerated gradient-free optimization methods with a non-Euclidean proximal operator
- A simple randomised algorithm for convex optimisation
- Nash equilibrium seeking in \(N\)-coalition games via a gradient-free method
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Asymptotically Exact Data Augmentation: Models, Properties, and Algorithms
- Distributed online bandit optimization under random quantization
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- Adaptive Tikhonov strategies for stochastic ensemble Kalman inversion
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Accelerating reinforcement learning with a directional-Gaussian-smoothing evolution strategy
- A zeroth order method for stochastic weakly convex optimization
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Oracle complexity separation in convex optimization
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- A new one-point residual-feedback oracle for black-box learning and control
- Robustness and averaging properties of a large-amplitude, high-frequency extremum seeking control scheme
- An Improved Unconstrained Approach for Bilevel Optimization
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Riemannian barycentres of Gibbs distributions: new results on concentration and convexity in compact symmetric spaces
- The recursive variational Gaussian approximation (R-VGA)
- Revisiting the ODE method for recursive algorithms: fast convergence using quasi stochastic approximation
- Superquantiles at work: machine learning applications and efficient subgradient computation
- Title not available
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Gradient-free distributed optimization with exact convergence
- Recent Theoretical Advances in Non-Convex Optimization
- A New Likelihood Ratio Method for Training Artificial Neural Networks
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
- Zeroth-order feedback optimization for cooperative multi-agent systems
- Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials
- A geometric integration approach to nonsmooth, nonconvex optimisation
- Efficient unconstrained black box optimization
- Derivative-free optimization methods
- On the computation of equilibria in monotone and potential stochastic hierarchical games
- Stochastic Three Points Method for Unconstrained Smooth Minimization
- Perturbed iterate SGD for Lipschitz continuous loss functions
- Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- Constrained Optimization in the Presence of Noise
- Leveraging randomized smoothing for optimal control of nonsmooth dynamical systems
- Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
- Spanning attack: reinforce black-box attacks with unlabeled data
- Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing
- A Review of Adversarial Attack and Defense for Classification Methods
- Title not available
- Linearly convergent adjoint free solution of least squares problems by random descent
- High probability complexity bounds for adaptive step search based on stochastic oracles
- No-regret learning for repeated non-cooperative games with lossy bandits
- Global optimization using random embeddings
- Small errors in random zeroth-order optimization are imaginary
- Convergence guarantees for forward gradient descent in the linear regression model
- Nonsmooth optimization by Lie bracket approximations into random directions
- A Smoothing Direct Search Method for Monte Carlo-Based Bound Constrained Composite Nonsmooth Optimization
- A derivative-free nonlinear least squares solver for nonsmooth functions
- Application of optimization methods in solving the problem of optimal control of assets and liabilities by a bank
- Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint
- A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Full-low evaluation methods for derivative-free optimization
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- On the global complexity of a derivative-free Levenberg-Marquardt algorithm via orthogonal spherical smoothing
- A Supervised Learning Approach Involving Active Subspaces for an Efficient Genetic Algorithm in High-Dimensional Optimization Problems
- Dimension Free Nonasymptotic Bounds on the Accuracy of High-Dimensional Laplace Approximation
- A gradient-free distributed optimization method for convex sum of nonconvex cost functions
- Worst-case evaluation complexity of a derivative-free quadratic regularization method
- Block coordinate type methods for optimization and learning
- Expected decrease for derivative-free algorithms using random subspaces
- New First-Order Algorithms for Stochastic Variational Inequalities
This page was built for publication: Random gradient-free minimization of convex functions