An optimal randomized incremental gradient method
DOI: 10.1007/s10107-017-1173-0
zbMath: 1432.90115
arXiv: 1507.02000
OpenAlex: W2964037929
MaRDI QID: Q1785198
Publication date: 28 September 2018
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1507.02000
Related Items
- Accelerated Bregman Primal-Dual Methods Applied to Optimal Transport and Wasserstein Barycenter Problems
- Oracle complexity separation in convex optimization
- On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
- On the Convergence of Stochastic Primal-Dual Hybrid Gradient
- Accelerating incremental gradient optimization with curvature information
- Linear convergence of cyclic SAGA
- Optimal Methods for Convex Risk-Averse Distributed Optimization
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
- Data-Driven Mirror Descent with Input-Convex Neural Networks
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Unifying framework for accelerated randomized methods in convex optimization
- Accelerated dual-averaging primal-dual method for composite convex minimization
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Communication-efficient algorithms for decentralized and stochastic optimization
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- Network manipulation algorithm based on inexact alternating minimization
- Accelerating variance-reduced stochastic gradient methods
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Gradient methods for minimizing composite functions
- An optimal method for stochastic composite optimization
- Erratum to: "Minimizing finite sums with the stochastic average gradient"
- Primal-dual first-order methods with \(\mathcal{O}(1/\varepsilon)\) iteration-complexity for cone programming
- Validation analysis of mirror descent stochastic approximation method
- Introductory lectures on convex optimization. A basic course.
- A first-order primal-dual algorithm for convex problems with applications to imaging
- On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Unconstrained Convex Minimization in Relative Scale
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Proximal Minimization Methods with Generalized Bregman Functions
- Bregman Monotone Optimization Algorithms
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization