Pages that link to "Item:Q431018"
From MaRDI portal
The following pages link to An optimal method for stochastic composite optimization (Q431018):
Displaying 45 items.
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression (Q4637017)
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent (Q4638051)
- Random Gradient Extrapolation for Distributed and Stochastic Optimization (Q4687240)
- Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent (Q4969072)
- (Q4969260)
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming (Q4976160)
- Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method (Q4993286)
- An Optimal High-Order Tensor Method for Convex Optimization (Q5026443)
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization (Q5081777)
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems (Q5085148)
- Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent (Q5094616)
- (Q5096506)
- On the Adaptivity of Stochastic Gradient-Based Optimization (Q5114394)
- (Q5148937)
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints (Q5158760)
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity (Q5233106)
- Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone (Q5242932)
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning (Q5254990)
- A Multilevel Proximal Gradient Algorithm for a Class of Composite Optimization Problems (Q5372650)
- Two stochastic optimization algorithms for convex optimization with fixed point constraints (Q5379458)
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization (Q5737735)
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions (Q5853717)
- Inexact model: a framework for optimization and variational inequalities (Q5865338)
- Universal intermediate gradient method for convex problems with inexact oracle (Q5865342)
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs (Q5870771)
- Research on three-step accelerated gradient algorithm in deep learning (Q5880102)
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates (Q5882231)
- Block coordinate type methods for optimization and learning (Q5889894)
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization (Q5962719)
- Subgradient ellipsoid method for nonsmooth convex problems (Q6038646)
- Unifying mirror descent and dual averaging (Q6038659)
- Semi-discrete optimal transport: hardness, regularization and numerical solution (Q6038666)
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems (Q6051310)
- Nonconvex optimization with inertial proximal stochastic variance reduction gradient (Q6052662)
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems (Q6053598)
- Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation (Q6054924)
- An overview of stochastic quasi-Newton methods for large-scale machine learning (Q6097379)
- A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize (Q6097380)
- Optimal Algorithms for Stochastic Complementary Composite Minimization (Q6136660)
- Block mirror stochastic gradient method for stochastic optimization (Q6158991)
- Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction (Q6166650)
- First-order methods for convex optimization (Q6169988)
- Data-Driven Mirror Descent with Input-Convex Neural Networks (Q6171691)
- Adaptive proximal SGD based on new estimating sequences for sparser ERM (Q6196471)
- Accelerated gradient methods for sparse statistical learning with nonconvex penalties (Q6494394)