Gradient sliding for composite optimization
Abstract: We consider in this paper a class of composite optimization problems whose objective function is given by the summation of general smooth and nonsmooth components, together with a relatively simple nonsmooth term. We present a new class of first-order methods, namely the gradient sliding algorithms, which can skip the computation of the gradient for the smooth component from time to time. As a consequence, these algorithms require only O(1/√ε) gradient evaluations for the smooth component in order to find an ε-solution for the composite problem, while still maintaining the optimal O(1/ε²) bound on the total number of subgradient evaluations for the nonsmooth component. We then present a stochastic counterpart for these algorithms and establish similar complexity bounds for solving an important class of stochastic composite optimization problems. Moreover, if the smooth component in the composite function is strongly convex, the developed gradient sliding algorithms can significantly reduce the number of gradient and subgradient evaluations for the smooth and nonsmooth components to O(log(1/ε)) and O(1/ε), respectively. Finally, we generalize these algorithms to the case when the smooth component is replaced by a nonsmooth one possessing a certain bilinear saddle point structure.
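The core idea described in the abstract can be illustrated with a heavily simplified sketch: the outer loop queries the gradient of the smooth component f only once per iteration, while an inner "sliding" loop runs several cheap subgradient steps for the nonsmooth component h against that frozen gradient. The step sizes, the averaging rule, and the function names below are illustrative choices for a toy problem, not the paper's actual step-size schedule or prox-sliding procedure.

```python
import numpy as np

def gradient_sliding(grad_f, subgrad_h, x0, outer_iters=50, inner_iters=20,
                     beta=1.0, gamma=0.01):
    """Simplified sketch of gradient sliding for minimizing f(x) + h(x).

    grad_f   -- gradient oracle for the smooth component f
                (called once per OUTER iteration)
    subgrad_h -- subgradient oracle for the nonsmooth component h
                (called once per INNER iteration, reusing the frozen gradient)
    The constants beta/gamma are illustrative, not the paper's schedule.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        g = grad_f(x)                  # the one gradient evaluation of f
        u = x.copy()
        u_avg = np.zeros_like(x)
        for _ in range(inner_iters):   # "sliding": f is only linearized here
            # subgradient step on <g, u> + h(u) + (beta/2)*||u - x||^2
            u = u - gamma * (g + subgrad_h(u) + beta * (u - x))
            u_avg += u
        x = u_avg / inner_iters        # averaged inner iterate becomes new x
    return x

# Toy instance: f(x) = 0.5*||x||^2 (smooth), h(x) = ||x - 1||_1 (nonsmooth);
# the minimizer of f + h is the all-ones vector.
x_star = gradient_sliding(lambda x: x, lambda x: np.sign(x - 1.0), np.zeros(3))
```

Note the evaluation-count asymmetry the abstract emphasizes: this run makes 50 gradient calls but 50 × 20 = 1000 subgradient calls, which is exactly the trade-off that lets the method keep the optimal subgradient complexity while using far fewer gradients.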
Recommendations
- Accelerated gradient sliding for structured convex optimization
- Gradient methods for minimizing composite functions
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- A smoothing stochastic gradient method for composite optimization
- Conditional gradient sliding for convex optimization
Cites work
- scientific article; zbMATH DE number 6253954
- scientific article; zbMATH DE number 3296905
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated and inexact forward-backward algorithms
- An optimal method for stochastic composite optimization
- Bregman Monotone Optimization Algorithms
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Introductory lectures on convex optimization. A basic course.
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Proximal Minimization Methods with Generalized Bregman Functions
- Smooth minimization of non-smooth functions
- Solving variational inequalities with stochastic mirror-prox algorithm
- Sparsity and Smoothness Via the Fused Lasso
- Tensor Decompositions and Applications
- Validation analysis of mirror descent stochastic approximation method
Cited in (26)
- Efficient algorithms for distributionally robust stochastic optimization with discrete scenario support
- Communication-efficient algorithms for decentralized and stochastic optimization
- A multilevel proximal gradient algorithm for a class of composite optimization problems
- Combining search directions using gradient flows
- Accelerated gradient sliding for minimizing a sum of functions
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- On the nonergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Universal Conditional Gradient Sliding for Convex Optimization
- Recent theoretical advances in decentralized distributed convex optimization
- Complementary composite minimization, small gradients in general norms, and applications
- Oracle complexity separation in convex optimization
- A dual approach for optimal algorithms in distributed optimization over networks
- Adaptive mirror descent algorithms for convex and strongly convex optimization problems with functional constraints
- Gradient methods for minimizing composite functions
- Solving Stochastic Optimization with Expectation Constraints Efficiently by a Stochastic Augmented Lagrangian-Type Algorithm
- Finite-time analysis and restarting scheme for linear two-time-scale stochastic approximation
- Conditional gradient sliding for convex optimization
- A smoothing stochastic gradient method for composite optimization
- A multi-step doubly stabilized bundle method for nonsmooth convex optimization
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Optimal Methods for Convex Risk-Averse Distributed Optimization
- Accelerated gradient sliding for structured convex optimization
This page was built for publication: Gradient sliding for composite optimization
MaRDI item Q312670