Pages that link to "Item:Q2397749"
The following pages link to Random gradient-free minimization of convex functions (Q2397749):
- Variable metric random pursuit (Q263217)
- Gradient and diagonal Hessian approximations using quadratic interpolation models and aligned regular bases (Q820723)
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models (Q1646566)
- On the information-adaptive variants of the ADMM: an iteration complexity perspective (Q1668725)
- Asynchronous gossip-based gradient-free method for multiagent optimization (Q1724545)
- A derivative-free trust-region algorithm for composite nonsmooth optimization (Q2013620)
- Gradient-free method for nonsmooth distributed optimization (Q2018475)
- An accelerated directional derivative method for smooth stochastic convex optimization (Q2029381)
- A stochastic subspace approach to gradient-free optimization in high dimensions (Q2044475)
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives (Q2052165)
- Accelerating reinforcement learning with a directional-Gaussian-smoothing evolution strategy (Q2055215)
- A zeroth order method for stochastic weakly convex optimization (Q2057220)
- A new one-point residual-feedback oracle for black-box learning and control (Q2063773)
- Nash equilibrium seeking in \(N\)-coalition games via a gradient-free method (Q2063781)
- Robustness and averaging properties of a large-amplitude, high-frequency extremum seeking control scheme (Q2063785)
- Riemannian barycentres of Gibbs distributions: new results on concentration and convexity in compact symmetric spaces (Q2064251)
- The recursive variational Gaussian approximation (R-VGA) (Q2066753)
- Revisiting the ODE method for recursive algorithms: fast convergence using quasi stochastic approximation (Q2070010)
- Superquantiles at work: machine learning applications and efficient subgradient computation (Q2070410)
- Inverse reinforcement learning in contextual MDPs (Q2071371)
- Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials (Q2083423)
- Linesearch Newton-CG methods for convex optimization with noise (Q2084588)
- A geometric integration approach to nonsmooth, nonconvex optimisation (Q2088134)
- Perturbed iterate SGD for Lipschitz continuous loss functions (Q2093279)
- Distributed online bandit optimization under random quantization (Q2097746)
- Noisy zeroth-order optimization for non-smooth saddle point problems (Q2104286)
- One-point gradient-free methods for smooth and non-smooth saddle-point problems (Q2117626)
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference (Q2137043)
- Oracle complexity separation in convex optimization (Q2139268)
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization (Q2143221)
- Efficient unconstrained black box optimization (Q2146451)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization (Q2151863)
- Zeroth-order methods for noisy Hölder-gradient functions (Q2162695)
- Gradient-free distributed optimization with exact convergence (Q2165968)
- Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization (Q2209727)
- Spanning attack: reinforce black-box attacks with unlabeled data (Q2217425)
- Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint (Q2242923)
- Accelerated gradient-free optimization methods with a non-Euclidean proximal operator (Q2289040)
- Accelerated directional search with non-Euclidean prox-structure (Q2290400)
- Incremental gradient-free method for nonsmooth distributed optimization (Q2411165)
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations (Q2661588)
- Improved complexities for stochastic conditional gradient methods under interpolation-like conditions (Q2670499)
- A mixed finite differences scheme for gradient approximation (Q2671431)
- Zeroth-order feedback optimization for cooperative multi-agent systems (Q2682294)
- Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings (Q2687068)
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs (Q2693641)
- On the computation of equilibria in monotone and potential stochastic hierarchical games (Q2693642)
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points (Q2696568)
- Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case (Q2826817)
- A Smoothing Direct Search Method for Monte Carlo-Based Bound Constrained Composite Nonsmooth Optimization (Q3174787)