Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization


Publication:2921184

DOI: 10.1214/10-SSY010 · zbMath: 1297.90097 · OpenAlex: W1980404857 · MaRDI QID: Q2921184

Anatoli B. Juditsky, Yu. E. Nesterov

Publication date: 7 October 2014

Full work available at URL: https://projecteuclid.org/euclid.ssy/1411044992
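For orientation: the paper concerns subgradient methods for minimizing uniformly convex functions, i.e. convex f admitting a lower bound of the form f(y) ≥ f(x) + ⟨g, y − x⟩ + μ‖y − x‖^ρ (up to a constant factor) for every subgradient g and some degree ρ ≥ 2. The Python sketch below is only a generic illustration of this problem class: a normalized subgradient method with iterate averaging applied to f(x) = ‖x‖³, which is uniformly convex of degree 3. It is not the deterministic or stochastic primal-dual algorithm analyzed by Juditsky and Nesterov; the step sizes and averaging weights are standard textbook assumptions.

```python
import numpy as np

# Generic sketch (not the paper's primal-dual scheme): normalized subgradient
# descent with iterate averaging on f(x) = ||x||^3, whose minimizer is x* = 0.
# Step sizes 1/sqrt(t+1) and uniform averaging are illustrative assumptions.

def subgradient(x):
    # For f(x) = ||x||^3 the (sub)gradient is 3 * ||x|| * x.
    return 3.0 * np.linalg.norm(x) * x

rng = np.random.default_rng(0)
x = rng.standard_normal(5)                  # arbitrary starting point
avg = np.zeros_like(x)

for t in range(2000):
    g = subgradient(x)
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:                        # exactly at the minimizer
        break
    x = x - (1.0 / np.sqrt(t + 1.0)) * g / gnorm   # normalized subgradient step
    avg += (x - avg) / (t + 1.0)            # running (uniform) average of iterates

print("f(averaged iterate) =", np.linalg.norm(avg) ** 3)  # close to 0
```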




Related Items (39)

Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
Greedy strategies for convex optimization
Stochastic forward-backward splitting for monotone inclusions
OSGA: a fast subgradient algorithm with optimal complexity
New results on subgradient methods for strongly convex optimization problems with a unified analysis
Simple and Optimal Methods for Stochastic Variational Inequalities, II: Markovian Noise and Policy Evaluation in Reinforcement Learning
Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
Primal-dual mirror descent method for constraint stochastic optimization problems
Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
Unnamed Item
Optimal Algorithms for Stochastic Complementary Composite Minimization
Simple and fast algorithm for binary integer and online linear programming
First-order methods for convex optimization
On the Adaptivity of Stochastic Gradient-Based Optimization
Unifying framework for accelerated randomized methods in convex optimization
Recent theoretical advances in decentralized distributed convex optimization
Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
Universal method for stochastic composite optimization problems
Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
On problems in the calculus of variations in increasingly elongated domains
Multistep stochastic mirror descent for risk-averse convex stochastic programs based on extended polyhedral risk measures
Convergence of stochastic proximal gradient algorithm
Inexact stochastic mirror descent for two-stage nonlinear stochastic programs
Fast gradient descent for convex minimization problems with an oracle producing a \((\delta, L)\)-model of function at the requested point
Some worst-case datasets of deterministic first-order methods for solving binary logistic regression
An accelerated directional derivative method for smooth stochastic convex optimization
Stochastic intermediate gradient method for convex problems with stochastic inexact oracle
Algorithms of robust stochastic optimization based on mirror descent method
Preface
Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
A family of subgradient-based methods for convex optimization problems in a unifying framework
Online estimation of the asymptotic variance for averaged stochastic gradient algorithms
Accelerate stochastic subgradient method by leveraging local growth condition
A dual approach for optimal algorithms in distributed optimization over networks
A Stochastic Variance Reduced Primal Dual Fixed Point Method for Linearly Constrained Separable Optimization
Noisy zeroth-order optimization for non-smooth saddle point problems
Universal intermediate gradient method for convex problems with inexact oracle



Cites Work

