Pages that link to "Item:Q5962719"
From MaRDI portal
The following pages link to Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization (Q5962719):
Displaying 50 items.
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming (Q263185)
- On the information-adaptive variants of the ADMM: an iteration complexity perspective (Q1668725)
- Conditional gradient type methods for composite nonlinear and stochastic optimization (Q1717236)
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis (Q1734769)
- Dynamic stochastic approximation for multi-stage stochastic optimization (Q2020613)
- Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces (Q2028468)
- An accelerated directional derivative method for smooth stochastic convex optimization (Q2029381)
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods (Q2031928)
- A stochastic approximation method for approximating the efficient frontier of chance-constrained nonlinear programs (Q2063194)
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization (Q2082285)
- Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces (Q2082546)
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization (Q2086938)
- Stopping criteria for, and strong convergence of, stochastic gradient descent on Bottou-Curtis-Nocedal functions (Q2089787)
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization (Q2103421)
- On stochastic accelerated gradient with convergence rate (Q2111814)
- A hybrid stochastic optimization framework for composite nonconvex optimization (Q2118109)
- Understanding generalization error of SGD in nonconvex optimization (Q2127232)
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization (Q2149551)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization (Q2151863)
- An interior stochastic gradient method for a class of non-Lipschitz optimization problems (Q2161545)
- Zeroth-order methods for noisy Hölder-gradient functions (Q2162695)
- Mini-batch learning of exponential family finite mixture models (Q2195820)
- Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis (Q2205985)
- Robust and sparse regression in generalized linear model by stochastic optimization (Q2303494)
- Generalized uniformly optimal methods for nonlinear programming (Q2316202)
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization (Q2679567)
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs (Q2693641)
- On the computation of equilibria in monotone and potential stochastic hierarchical games (Q2693642)
- Conditional Gradient Sliding for Convex Optimization (Q2816241)
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization (Q2945126)
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization (Q2954396)
- Penalty methods with stochastic approximation for stochastic nonlinear programming (Q2970100)
- Stochastic Model-Based Minimization of Weakly Convex Functions (Q4620418)
- (Q4969167)
- (Q4969178)
- (Q4969209)
- (Q4969246)
- (Q4999088)
- (Q5038021)
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes (Q5038180)
- (Q5054636)
- Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning (Q5071109)
- Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates (Q5072589)
- Distributed Variable Sample-Size Gradient-Response and Best-Response Schemes for Stochastic Nash Equilibrium Problems (Q5072591)
- Graphical Convergence of Subgradients in Nonconvex Optimization and Learning (Q5076697)
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models (Q5079553)
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization (Q5081777)
- (Q5096506)
- Trimmed Statistical Estimation via Variance Reduction (Q5108267)
- Open Problem—Iterative Schemes for Stochastic Optimization: Convergence Statements and Limit Theorems (Q5113905)