A first-order stochastic primal-dual algorithm with correction step
From MaRDI portal
Publication:5357019
Abstract: We investigate the convergence properties of a stochastic primal-dual splitting algorithm for solving structured monotone inclusions involving the sum of a cocoercive operator and a composite monotone operator. The proposed method is the stochastic extension to monotone inclusions of a proximal method studied for saddle-point problems by Y. Drori, S. Sabach, and M. Teboulle ["A simple algorithm for a class of nonsmooth convex-concave saddle-point problems", 2015] and I. Loris and C. Verhoeven ["On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty", 2011]. It consists of a forward step determined by a stochastic evaluation of the cocoercive operator, a backward step in the dual variables involving the resolvent of the monotone operator, and an additional forward (correction) step that reuses the stochastic evaluation of the cocoercive operator from the first step. We prove weak almost sure convergence of the iterates by showing that the primal-dual sequence generated by the method is stochastic quasi-Fejér monotone with respect to the set of zeros of the considered primal and dual inclusions. Additional results on ergodic convergence in expectation are given for the special case of saddle-point models.
Recommendations
- Stochastic approximations and perturbations in forward-backward splitting for monotone operators
- Stochastic forward Douglas-Rachford splitting method for monotone inclusions
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- Stochastic primal dual fixed point method for composite optimization
- A fully stochastic primal-dual algorithm
Cited in (9)
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- Modified Fejér sequences and applications
- A fully stochastic primal-dual algorithm
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- An improved first-order primal-dual algorithm with a new correction step
- Variable smoothing for convex optimization problems using stochastic gradients
- A regularization interpretation of the proximal point method for weakly convex functions
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems
- Convergence of stochastic proximal gradient algorithm