Primal-dual incremental gradient method for nonsmooth and convex optimization problems
From MaRDI portal
Publication: 2230784
DOI: 10.1007/S11590-021-01752-X
zbMATH Open: 1477.90063
arXiv: 2011.02059
OpenAlex: W3165002395
MaRDI QID: Q2230784
FDO: Q2230784
Authors: Afrooz Jalilzadeh
Publication date: 28 September 2021
Published in: Optimization Letters
Abstract: In this paper, we consider a nonsmooth convex finite-sum problem with a conic constraint. To overcome the challenge of projecting onto the constraint set and computing the full (sub)gradient, we introduce a primal-dual incremental gradient scheme in which only a component function and two constraints are used to update each primal-dual sub-iteration in cyclic order. We demonstrate an asymptotic sublinear rate of convergence in terms of suboptimality and infeasibility, which improves on state-of-the-art incremental gradient schemes in this setting. Numerical results suggest that the proposed scheme compares well with competitive methods.
Full work available at URL: https://arxiv.org/abs/2011.02059
Recommendations
- Incremental subgradient method for nonsmooth convex optimization with fixed point constraints
- On convergence analysis of dual proximal-gradient methods with approximate gradient for a class of nonsmooth convex minimization problems
- Incremental subgradients for constrained convex optimization: A unified framework and new methods
- Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
- Primal-dual subgradient methods for convex problems
Cites Work
- Lectures on modern convex optimization. Analysis, algorithms, and engineering applications
- Compressed sensing
- A Convergent Incremental Gradient Method with a Constant Step Size
- Title not available
- Incremental subgradient methods for nondifferentiable optimization
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Algorithms for Fitting the Constrained Lasso
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
- Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
- A primal-dual algorithm with line search for general convex-concave saddle point problems
Cited In (10)
- Running Primal-Dual Gradient Method for Time-Varying Nonconvex Problems
- Perturbed proximal primal-dual algorithm for nonconvex nonsmooth optimization
- The Primal-Dual Hybrid Gradient Method for Semiconvex Splittings
- A dual method for minimizing a nonsmooth objective over one smooth inequality constraint
- Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
- A primal-dual approach to inexact subgradient methods
- Inexact primal-dual gradient projection methods for nonlinear optimization on convex set
- An incremental primal-dual method for nonlinear programming with special structure
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
- Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints