Pages that link to "Item:Q2978646"
The following pages link to Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations (Q2978646):
Displaying 36 items.
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises (Q1616222)
- On the convergence rate issues of general Markov search for global minimum (Q1685583)
- An accelerated directional derivative method for smooth stochastic convex optimization (Q2029381)
- A stochastic subspace approach to gradient-free optimization in high dimensions (Q2044475)
- A zeroth order method for stochastic weakly convex optimization (Q2057220)
- A new one-point residual-feedback oracle for black-box learning and control (Q2063773)
- Model-free linear quadratic regulator (Q2094032)
- Distributed online bandit optimization under random quantization (Q2097746)
- Noisy zeroth-order optimization for non-smooth saddle point problems (Q2104286)
- One-point gradient-free methods for smooth and non-smooth saddle-point problems (Q2117626)
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference (Q2137043)
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization (Q2143221)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization (Q2151863)
- Gradient-free distributed optimization with exact convergence (Q2165968)
- On the upper bound for the expectation of the norm of a vector uniformly distributed on the sphere and the phenomenon of concentration of uniform measure on the sphere (Q2282831)
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case (Q2397263)
- Decentralized online convex optimization based on signs of relative states (Q2665168)
- Personalized optimization with user's feedback (Q2665405)
- Improved complexities for stochastic conditional gradient methods under interpolation-like conditions (Q2670499)
- Zeroth-order feedback optimization for cooperative multi-agent systems (Q2682294)
- Exact optimization: Part I (Q2687241)
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points (Q2696568)
- Finite Difference Gradient Approximation: To Randomize or Not? (Q5057983)
- Zeroth-order optimization with orthogonal random directions (Q6038668)
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems (Q6053598)
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact (Q6060544)
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs (Q6060563)
- A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization (Q6066421)
- A gradient-free distributed optimization method for convex sum of nonconvex cost functions (Q6069332)
- Direct Search Based on Probabilistic Descent in Reduced Spaces (Q6071887)
- Distributed Nash equilibrium learning: a second-order proximal algorithm (Q6092410)
- Optimistic optimisation of composite objective with exponentiated update (Q6097136)
- Federated learning for minimizing nonsmooth convex loss functions (Q6112869)
- Re-thinking high-dimensional mathematical statistics. Abstracts from the workshop held May 15--21, 2022 (Q6115552)
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization (Q6175706)
- Online distributed dual averaging algorithm for multi-agent bandit optimization over time-varying general directed networks (Q6180222)