Pages that link to "Item:Q5408211"
From MaRDI portal
The following pages link to Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms (Q5408211):
Displaying 50 items.
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming (Q263185)
- Stochastic forward-backward splitting for monotone inclusions (Q289110)
- Gradient sliding for composite optimization (Q312670)
- New results on subgradient methods for strongly convex optimization problems with a unified analysis (Q316174)
- A sparsity preserving stochastic gradient methods for sparse regression (Q457215)
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions (Q507334)
- Minimizing finite sums with the stochastic average gradient (Q517295)
- Stochastic intermediate gradient method for convex problems with stochastic inexact oracle (Q727222)
- On the information-adaptive variants of the ADMM: an iteration complexity perspective (Q1668725)
- Inexact proximal stochastic gradient method for convex composite optimization (Q1694394)
- Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market (Q1709750)
- Conditional gradient type methods for composite nonlinear and stochastic optimization (Q1717236)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038)
- An optimal randomized incremental gradient method (Q1785198)
- Convergence of stochastic proximal gradient algorithm (Q2019902)
- Dynamic stochastic approximation for multi-stage stochastic optimization (Q2020613)
- Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning (Q2045021)
- Stochastic approximation method using diagonal positive-definite matrices for convex optimization with fixed point constraints (Q2138441)
- A stochastic subgradient method for distributionally robust non-convex and non-smooth learning (Q2159458)
- Algorithms for stochastic optimization with function or expectation constraints (Q2181600)
- Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants (Q2282819)
- Communication-efficient algorithms for decentralized and stochastic optimization (Q2297648)
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate (Q2301128)
- Generalized uniformly optimal methods for nonlinear programming (Q2316202)
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization (Q2515032)
- Stochastic intermediate gradient method for convex optimization problems (Q2631196)
- Conditional Gradient Sliding for Convex Optimization (Q2816241)
- A family of subgradient-based methods for convex optimization problems in a unifying framework (Q2829570)
- On the Solution of Stochastic Optimization and Variational Problems in Imperfect Information Regimes (Q2832894)
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization (Q2945126)
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization (Q2954396)
- Penalty methods with stochastic approximation for stochastic nonlinear programming (Q2970100)
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity (Q4558142)
- Random Gradient Extrapolation for Distributed and Stochastic Optimization (Q4687240)
- (Q4969260)
- (Q4998940)
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems (Q5085148)
- (Q5148937)
- Accelerate stochastic subgradient method by leveraging local growth condition (Q5236746)
- Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone (Q5242932)
- Two stochastic optimization algorithms for convex optimization with fixed point constraints (Q5379458)
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions (Q5853717)
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs (Q5870771)
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization (Q5962719)
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems (Q6051310)
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization (Q6116247)
- Optimal Algorithms for Stochastic Complementary Composite Minimization (Q6136660)
- Theoretical analysis of Adam using hyperparameters close to one without Lipschitz smoothness (Q6145578)
- Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise (Q6155875)
- Block mirror stochastic gradient method for stochastic optimization (Q6158991)