An optimal randomized incremental gradient method
From MaRDI portal
Abstract: In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of \(m\) (\(\ge 1\)) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a primal-dual termination criterion. Our major contribution is to develop a randomized primal-dual gradient (RPDG) method, which needs to compute the gradient of only one randomly selected smooth component at each iteration, but can possibly achieve better complexity than PDG in terms of the total number of gradient evaluations. More specifically, we show that the total number of gradient evaluations performed by RPDG can be \({\mathcal {O}}(\sqrt{m})\) times smaller, both in expectation and with high probability, than that performed by deterministic optimal first-order methods under favorable situations. We also show that the complexity of the RPDG method is not improvable by developing a new lower complexity bound for a general class of randomized methods for solving large-scale finite-sum convex optimization problems. Moreover, through the development of PDG and RPDG, we introduce a novel game-theoretic interpretation for these optimal methods for convex optimization.
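The problem class described above can be written as \(\min_x \{\psi(x) := \sum_{i=1}^m f_i(x) + h(x)\}\), where each \(f_i\) is smooth and \(h\) is a relatively simple term. The following Python sketch is not the paper's RPDG method; it only illustrates, on a hypothetical regularized least-squares instance, the incremental-gradient pattern the abstract highlights: each iteration evaluates the gradient of a single randomly selected component \(f_i\). All data, step sizes, and names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical finite-sum instance (NOT from the paper):
#   psi(x) = (1/m) * sum_i 0.5 * (a_i^T x - b_i)^2 + (mu/2) * ||x||^2,
# i.e., m smooth components plus a simple strongly convex term.
rng = np.random.default_rng(0)
m, d = 100, 10
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
mu = 0.1  # strong-convexity modulus of the simple term

def grad_component(x, i):
    """Gradient of the i-th smooth component f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
eta = 0.005  # constant step size, chosen ad hoc for this toy instance
for k in range(20000):
    i = rng.integers(m)                 # touch ONE randomly selected component
    g = grad_component(x, i) + mu * x   # unbiased estimate of the full gradient
    x -= eta * g                        # plain incremental/stochastic step;
                                        # RPDG additionally uses dual variables
                                        # and acceleration to reach the optimal rate.

# Reference solution of the toy problem via its normal equations.
x_star = np.linalg.solve(A.T @ A / m + mu * np.eye(d), A.T @ b / m)
print("distance to optimum:", np.linalg.norm(x - x_star))
```

With a constant step size this plain scheme only reaches a neighborhood of the optimum; the point of the sketch is the per-iteration cost, one component gradient instead of all \(m\), which is what RPDG exploits to save up to a factor of \(\sqrt{m}\) in total gradient evaluations.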
Recommendations
- Convergence rate of incremental subgradient algorithms
- Random algorithms for convex minimization problems
- Incremental proximal methods for large scale convex optimization
- Incremental subgradient methods for nondifferentiable optimization
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Cites work
- scientific article; zbMATH DE number 1667417
- scientific article; zbMATH DE number 3790208
- scientific article; zbMATH DE number 3296905
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization
- An optimal method for stochastic composite optimization
- Bregman Monotone Optimization Algorithms
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Erratum to: "Minimizing finite sums with the stochastic average gradient"
- Gradient methods for minimizing composite functions
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Introductory lectures on convex optimization. A basic course.
- On lower and upper bounds in smooth and strongly convex optimization
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal primal-dual methods for a class of saddle point problems
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Proximal Minimization Methods with Generalized Bregman Functions
- Robust Stochastic Approximation Approach to Stochastic Programming
- Smooth minimization of non-smooth functions
- Stochastic dual coordinate ascent methods for regularized loss minimization
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Unconstrained convex minimization in relative scale
- Validation analysis of mirror descent stochastic approximation method
Cited in (48)
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- scientific article; zbMATH DE number 7306860
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Accelerating incremental gradient optimization with curvature information
- Communication-efficient algorithms for decentralized and stochastic optimization
- Accelerated Bregman Primal-Dual Methods Applied to Optimal Transport and Wasserstein Barycenter Problems
- scientific article; zbMATH DE number 5883928
- Network manipulation algorithm based on inexact alternating minimization
- A Markovian Incremental Stochastic Subgradient Algorithm
- A New Class of Incremental Gradient Methods for Least Squares Problems
- scientific article; zbMATH DE number 5018816
- Stochastic gradient algorithm with random truncations
- String-averaging incremental stochastic subgradient algorithms
- Incrementally updated gradient methods for constrained and regularized optimization
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Data-Driven Mirror Descent with Input-Convex Neural Networks
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Accelerated dual-averaging primal–dual method for composite convex minimization
- Incremental proximal methods for large scale convex optimization
- scientific article; zbMATH DE number 7370566
- A Gradient Complexity Analysis for Minimizing the Sum of Strongly Convex Functions with Varying Condition Numbers
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- A first order method for finding minimal norm-like solutions of convex optimization problems
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Accelerating variance-reduced stochastic gradient methods
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Random gradient extrapolation for distributed and stochastic optimization
- Random algorithms for convex minimization problems
- On the convergence of stochastic primal-dual hybrid gradient
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- A data efficient and feasible level set method for stochastic convex optimization with expectation constraints
- Nesterov's acceleration for approximate Newton
- Random gradient-free minimization of convex functions
- Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
- A memory efficient incremental gradient method for regularized minimization
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Unifying framework for accelerated randomized methods in convex optimization
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- Oracle complexity separation in convex optimization
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- Value iteration for streaming data on a continuous space with gradient method in an RKHS
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Linear convergence of cyclic SAGA
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Perseus: a simple and optimal high-order method for variational inequalities
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Optimal Methods for Convex Risk-Averse Distributed Optimization