Random gradient-free minimization of convex functions
Publication:2397749
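For context, the method named in the title replaces the gradient with a finite-difference estimate taken along a random Gaussian direction. The sketch below is illustrative only and is not part of the portal record; the function name `rgf_step`, the step-size choice, and the quadratic test problem are assumptions made for this example.

```python
import numpy as np

def rgf_step(f, x, mu, h, rng):
    """One step of a random gradient-free (zeroth-order) scheme:
    a forward-difference estimate along a random Gaussian direction u
    stands in for the true gradient."""
    u = rng.standard_normal(x.shape)        # random search direction
    g = (f(x + mu * u) - f(x)) / mu * u     # two-point gradient estimate
    return x - h * g                        # gradient-type update

# Illustrative usage on a convex quadratic f(x) = ||x||^2 / 2 (so L = 1).
rng = np.random.default_rng(0)
f = lambda x: 0.5 * float(x @ x)
n = 10
x = np.ones(n)
h = 1.0 / (4 * (n + 4))                     # conservative step size for L = 1
for _ in range(2000):
    x = rgf_step(f, x, mu=1e-4, h=h, rng=rng)
print(f(x))                                 # close to the optimal value 0
```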
Recommendations
- Random algorithms for convex minimization problems
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Random convex programs
- Optimization of convex functions with random pursuit
- Random Coordinate Descent Methods for $\ell_0$ Regularized Convex Optimization
- An optimal randomized incremental gradient method
- A simple randomised algorithm for convex optimisation
- Convergence of a random algorithm for function optimization
- On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization
- A variational approach to stochastic minimization of convex functionals
Cites work
- scientific article (zbMATH DE number 4164577; no title available)
- scientific article (zbMATH DE number 3790208; no title available)
- scientific article (zbMATH DE number 5485582; no title available)
- A Simplex Method for Function Minimization
- Algorithms for approximate calculation of the minimum of a convex function from its values
- Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions
- Convergence of the restricted Nelder-Mead algorithm in two dimensions
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Expected number of steps of a random optimization method
- Introduction to Derivative-Free Optimization
- Introductory lectures on convex optimization. A basic course.
- Lexicographic differentiation of nonsmooth functions
- On the convergence of the Baba and Dorea random optimization methods
- Online convex optimization in the bandit setting: gradient descent without a gradient
- Optimization and nonsmooth analysis
- Random optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Solving convex programs by random walks
- Stochastic convex optimization with bandit feedback
Cited in
(only the first 100 citing items are shown)
- A derivative-free nonlinear least squares solver for nonsmooth functions
- Application of optimization methods in solving the problem of optimal control of assets and liabilities by a bank
- Block coordinate type methods for optimization and learning
- New first-order algorithms for stochastic variational inequalities
- Improved complexities for stochastic conditional gradient methods under interpolation-like conditions
- scientific article (zbMATH DE number 7415113; no title available)
- Zeroth-order Riemannian averaging stochastic approximation algorithms
- scientific article (zbMATH DE number 5883928; no title available)
- Federated learning for minimizing nonsmooth convex loss functions
- No-regret learning for repeated non-cooperative games with lossy bandits
- Full-low evaluation methods for derivative-free optimization
- Expected decrease for derivative-free algorithms using random subspaces
- Leveraging randomized smoothing for optimal control of nonsmooth dynamical systems
- Worst-case evaluation complexity of a derivative-free quadratic regularization method
- Global optimization using random embeddings
- Convergence rates for stochastic approximation: biased noise with unbounded variance, and applications
- A unified analysis of stochastic gradient‐free Frank–Wolfe methods
- One-point gradient-free methods for smooth and non-smooth saddle-point problems
- Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
- Stochastic search for a parametric cost function approximation: energy storage with rolling forecasts
- Smoothing unadjusted Langevin algorithms for nonsmooth composite potential functions
- Coordinate descent methods beyond smoothness and separability
- Zeroth-order stochastic compositional algorithms for risk-aware learning
- Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings
- Quadratic regularization methods with finite-difference gradient approximations
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- Retraction-based direct search methods for derivative free Riemannian optimization
- Truncated Cauchy random perturbations for smoothed functional-based stochastic optimization
- Convergence guarantees for forward gradient descent in the linear regression model
- Linearly convergent adjoint free solution of least squares problems by random descent
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- On the global complexity of a derivative-free Levenberg-Marquardt algorithm via orthogonal spherical smoothing
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Nonsmooth optimization by Lie bracket approximations into random directions
- On the numerical performance of finite-difference-based methods for derivative-free optimization
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- Finite Difference Gradient Approximation: To Randomize or Not?
- Tracking and Regret Bounds for Online Zeroth-Order Euclidean and Riemannian Optimization
- A Supervised Learning Approach Involving Active Subspaces for an Efficient Genetic Algorithm in High-Dimensional Optimization Problems
- High probability complexity bounds for adaptive step search based on stochastic oracles
- A smoothing direct search method for Monte Carlo-based bound constrained composite nonsmooth optimization
- Accelerated gradient methods with absolute and relative noise in the gradient
- Tuning of multivariable model predictive controllers through expert bandit feedback
- Dimension Free Nonasymptotic Bounds on the Accuracy of High-Dimensional Laplace Approximation
- Convergence properties of stochastic proximal subgradient method in solving a class of composite optimization problems with cardinality regularizer
- Distributed zeroth-order optimization: convergence rates that match centralized counterpart
- Nonsmooth optimization over the Stiefel manifold and beyond: proximal gradient method and recent variants
- Recent theoretical advances in decentralized distributed convex optimization
- scientific article (zbMATH DE number 7625189; no title available)
- Unifying framework for accelerated randomized methods in convex optimization
- Online Statistical Inference for Stochastic Optimization via Kiefer-Wolfowitz Methods
- Stochastic zeroth order descent with structured directions
- A gradient‐free distributed optimization method for convex sum of nonconvex cost functions
- Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint
- A trust region method for noisy unconstrained optimization
- Noisy zeroth-order optimization for non-smooth saddle point problems
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- A noise-tolerant quasi-Newton algorithm for unconstrained optimization
- Zeroth-order regularized optimization (ZORO): approximately sparse gradients and adaptive sampling
- Effective stabilized self-training on few-labeled graph data
- A Review of Adversarial Attack and Defense for Classification Methods
- Pathological subgradient dynamics
- Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions
- Spanning attack: reinforce black-box attacks with unlabeled data
- Small errors in random zeroth-order optimization are imaginary
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
- Incremental gradient-free method for nonsmooth distributed optimization
- Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing
- Zeroth-order methods for noisy Hölder-gradient functions
- Direct Search Based on Probabilistic Descent in Reduced Spaces
- Robust design optimization for enhancing delamination resistance of composites
- Perturbed iterate SGD for Lipschitz continuous loss functions
- Optimization of convex functions with random pursuit
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- Worst case complexity of direct search under convexity
- Zeroth-order feedback optimization for cooperative multi-agent systems
- A new one-point residual-feedback oracle for black-box learning and control
- Robustness and averaging properties of a large-amplitude, high-frequency extremum seeking control scheme
- Oracle complexity separation in convex optimization
- A New Likelihood Ratio Method for Training Artificial Neural Networks
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Improved exploitation of higher order smoothness in derivative-free optimization
- Nash equilibrium seeking in \(N\)-coalition games via a gradient-free method
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- scientific article (zbMATH DE number 7370642; no title available)
- Zeroth-order optimization with orthogonal random directions
- Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials
- Derivative-free optimization methods
- On the computation of equilibria in monotone and potential stochastic hierarchical games
- A geometric integration approach to nonsmooth, nonconvex optimisation
- Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
- Distributed online bandit optimization under random quantization
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Gradient and diagonal Hessian approximations using quadratic interpolation models and aligned regular bases
- Accelerating reinforcement learning with a directional-Gaussian-smoothing evolution strategy
- A zeroth order method for stochastic weakly convex optimization
- An accelerated directional derivative method for smooth stochastic convex optimization
- Asynchronous gossip-based gradient-free method for multiagent optimization
This page was built for publication: Random gradient-free minimization of convex functions