Random gradient-free minimization of convex functions
Publication: 2397749
DOI: 10.1007/s10208-015-9296-2
zbMath: 1380.90220
OpenAlex: W2149479912
MaRDI QID: Q2397749
Yu. E. Nesterov, Vladimir Spokoiny
Publication date: 23 May 2017
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-015-9296-2
Keywords: optimization; stochastic optimization; random methods; convex functions; complexity bounds; derivative-free methods
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Numerical mathematical programming methods (65K05); Convex programming (90C25)
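As a pointer for readers of this record: the paper builds on a random gradient-free oracle in which a Gaussian direction and two function values stand in for the gradient, and its complexity bounds show that such schemes need roughly a dimension factor more oracle calls than their gradient-based counterparts. The following is a minimal, illustrative Python sketch of one step of this kind of scheme; the function name, step size h and smoothing parameter mu are illustrative choices, not the values prescribed in the paper.

    import numpy as np

    def random_gradient_free_step(f, x, mu=1e-4, h=1e-2, rng=None):
        # Sample a Gaussian direction u and form the two-point estimate
        #   g = (f(x + mu * u) - f(x)) / mu * u,
        # which replaces the gradient in an ordinary descent step.
        # (Illustrative parameters, not the paper's prescribed values.)
        rng = np.random.default_rng() if rng is None else rng
        u = rng.standard_normal(x.shape)
        g = (f(x + mu * u) - f(x)) / mu * u
        return x - h * g

    # Illustrative usage on the convex quadratic f(x) = 0.5 * ||x||^2.
    f = lambda x: 0.5 * float(np.dot(x, x))
    x = np.ones(5)
    for _ in range(2000):
        x = random_gradient_free_step(f, x)
    print(np.linalg.norm(x))  # expected to be close to zero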
Related Items
- Gradient and diagonal Hessian approximations using quadratic interpolation models and aligned regular bases
- On the numerical performance of finite-difference-based methods for derivative-free optimization
- Full-low evaluation methods for derivative-free optimization
- Block coordinate type methods for optimization and learning
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- New First-Order Algorithms for Stochastic Variational Inequalities
- Oracle complexity separation in convex optimization
- A Smoothing Direct Search Method for Monte Carlo-Based Bound Constrained Composite Nonsmooth Optimization
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Finite Difference Gradient Approximation: To Randomize or Not?
- Randomized Iterative Methods for Linear Systems
- Efficient unconstrained black box optimization
- Adaptive Tikhonov strategies for stochastic ensemble Kalman inversion
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Asymptotically Exact Data Augmentation: Models, Properties, and Algorithms
- Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
- Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning
- Tracking and Regret Bounds for Online Zeroth-Order Euclidean and Riemannian Optimization
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
- Zeroth-order methods for noisy Hölder-gradient functions
- Gradient-free distributed optimization with exact convergence
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A New Likelihood Ratio Method for Training Artificial Neural Networks
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
- Incremental gradient-free method for nonsmooth distributed optimization
- Improved complexities for stochastic conditional gradient methods under interpolation-like conditions
- A mixed finite differences scheme for gradient approximation
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Zeroth-order optimization with orthogonal random directions
- A trust region method for noisy unconstrained optimization
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Dimension Free Nonasymptotic Bounds on the Accuracy of High-Dimensional Laplace Approximation
- Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
- A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization
- A gradient-free distributed optimization method for convex sum of nonconvex cost functions
- Robust design optimization for enhancing delamination resistance of composites
- Direct Search Based on Probabilistic Descent in Reduced Spaces
- An Improved Unconstrained Approach for Bilevel Optimization
- Accelerated gradient methods with absolute and relative noise in the gradient
- Stochastic search for a parametric cost function approximation: energy storage with rolling forecasts
- Smoothing unadjusted Langevin algorithms for nonsmooth composite potential functions
- A unified analysis of stochastic gradient-free Frank–Wolfe methods
- Zeroth-order feedback optimization for cooperative multi-agent systems
- Federated learning for minimizing nonsmooth convex loss functions
- Effective stabilized self-training on few-labeled graph data
- Pathological Subgradient Dynamics
- Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
- On the computation of equilibria in monotone and potential stochastic hierarchical games
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
- Spanning attack: reinforce black-box attacks with unlabeled data
- Asynchronous gossip-based gradient-free method for multiagent optimization
- Efficient Convex Optimization with Oracles
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint
- Tuning of multivariable model predictive controllers through expert bandit feedback
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- Gradient-free method for nonsmooth distributed optimization
- An accelerated directional derivative method for smooth stochastic convex optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Worst case complexity of direct search under convexity
- Accelerated gradient-free optimization methods with a non-Euclidean proximal operator
- Accelerated directional search with non-Euclidean prox-structure
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Stochastic Three Points Method for Unconstrained Smooth Minimization
- Accelerating reinforcement learning with a directional-Gaussian-smoothing evolution strategy
- A zeroth order method for stochastic weakly convex optimization
- Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case
- A new one-point residual-feedback oracle for black-box learning and control
- Nash equilibrium seeking in \(N\)-coalition games via a gradient-free method
- Robustness and averaging properties of a large-amplitude, high-frequency extremum seeking control scheme
- Riemannian barycentres of Gibbs distributions: new results on concentration and convexity in compact symmetric spaces
- Derivative-free optimization methods
- The recursive variational Gaussian approximation (R-VGA)
- Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
- Revisiting the ODE method for recursive algorithms: fast convergence using quasi stochastic approximation
- Superquantiles at work: machine learning applications and efficient subgradient computation
- Inverse reinforcement learning in contextual MDPs
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials
- Linesearch Newton-CG methods for convex optimization with noise
- Direct Search Based on Probabilistic Descent
- A geometric integration approach to nonsmooth, nonconvex optimisation
- A Supervised Learning Approach Involving Active Subspaces for an Efficient Genetic Algorithm in High-Dimensional Optimization Problems
- Perturbed iterate SGD for Lipschitz continuous loss functions
- Distributed online bandit optimization under random quantization
- Noisy zeroth-order optimization for non-smooth saddle point problems
- A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization
- One-point gradient-free methods for smooth and non-smooth saddle-point problems
- Variable metric random pursuit
- Linearly convergent adjoint free solution of least squares problems by random descent
- No-regret learning for repeated non-cooperative games with lossy bandits
- Global optimization using random embeddings
- Nonsmooth optimization by Lie bracket approximations into random directions
- First-order methods for convex optimization
- Quadratic regularization methods with finite-difference gradient approximations
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Constrained Optimization in the Presence of Noise
- Worst-case evaluation complexity of a derivative-free quadratic regularization method
- Unifying framework for accelerated randomized methods in convex optimization
- Recent theoretical advances in decentralized distributed convex optimization
- Recent Theoretical Advances in Non-Convex Optimization
Cites Work
- Algorithms for approximate calculation of the minimum of a convex function from its values
- Introductory lectures on convex optimization. A basic course.
- On the convergence of the Baba and Dorea random optimization methods
- Random optimization
- Lexicographic differentiation of nonsmooth functions
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Convergence of the Restricted Nelder–Mead Algorithm in Two Dimensions
- Introduction to Derivative-Free Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Optimization and nonsmooth analysis
- Convergence Properties of the Nelder–Mead Simplex Method in Low Dimensions
- Stochastic Convex Optimization with Bandit Feedback
- A Simplex Method for Function Minimization
- Solving convex programs by random walks
- Expected number of steps of a random optimization method