A fully stochastic primal-dual algorithm
From MaRDI portal
Publication: 828693
DOI: 10.1007/s11590-020-01614-y
zbMATH Open: 1466.90058
arXiv: 1901.08170
OpenAlex: W3042700844
MaRDI QID: Q828693
FDO: Q828693
Adil Salim, Pascal Bianchi, W. Hachem
Publication date: 5 May 2021
Published in: Optimization Letters
Abstract: A new stochastic primal-dual algorithm for solving a composite optimization problem is proposed. It is assumed that all the functions/operators that enter the optimization problem are given as statistical expectations. These expectations are unknown but are revealed across time through i.i.d. realizations. The proposed algorithm is proven to converge to a saddle point of the Lagrangian function. Within the framework of monotone operator theory, the convergence proof relies on recent results on the stochastic forward-backward algorithm involving random monotone operators. An example of convex optimization under stochastic linear constraints is considered.
Full work available at URL: https://arxiv.org/abs/1901.08170
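The setting described in the abstract can be illustrated on a toy problem. The sketch below is not the paper's exact scheme; it is a minimal stochastic primal-dual (Arrow-Hurwicz-type) iteration, assuming a hypothetical instance: minimize (1/2)||x||^2 subject to a stochastic linear constraint E[a]^T x = E[b], where the expectations E[a] = (1, 1) and E[b] = 2 are unknown and only i.i.d. samples of (a, b) are observed. The solution of this instance is x* = (1, 1) with multiplier lambda* = -1.

```python
import random

def stochastic_primal_dual(n_iter=100000, seed=0):
    # Hypothetical toy instance (not taken from the paper):
    #   minimize (1/2)||x||^2  subject to  E[a]^T x = E[b],
    # with E[a] = (1, 1) and E[b] = 2, so the saddle point is
    # x* = (1, 1), lambda* = -1.
    rng = random.Random(seed)
    x = [0.0, 0.0]        # primal iterate
    lam = 0.0             # dual iterate (Lagrange multiplier)
    x_avg = [0.0, 0.0]    # running average of primal iterates
    lam_avg = 0.0         # running average of dual iterates
    for t in range(1, n_iter + 1):
        # i.i.d. realizations of the constraint data; the
        # expectations themselves are never used by the algorithm.
        a = [1.0 + 0.1 * rng.gauss(0, 1), 1.0 + 0.1 * rng.gauss(0, 1)]
        b = 2.0 + 0.1 * rng.gauss(0, 1)
        gamma = 0.5 / t ** 0.5  # diminishing step size
        # stochastic gradient of the Lagrangian in x: x + lam * a
        x_new = [x[i] - gamma * (x[i] + lam * a[i]) for i in range(2)]
        # stochastic gradient in lam: a^T x - b (ascent step)
        lam += gamma * (sum(a[i] * x[i] for i in range(2)) - b)
        x = x_new
        # Polyak-Ruppert averaging damps the sampling noise
        x_avg = [x_avg[i] + (x[i] - x_avg[i]) / t for i in range(2)]
        lam_avg += (lam - lam_avg) / t
    return x_avg, lam_avg

x_avg, lam_avg = stochastic_primal_dual()
# averaged iterates approach x* = (1, 1), lambda* = -1
```

The diminishing step size and iterate averaging are standard choices for stochastic saddle-point iterations; the paper's actual algorithm and step-size conditions are those stated in the article itself.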
Cites Work
- Variational Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping
- Familles d'opérateurs maximaux monotones et mesurabilité
- Stochastic approximations and perturbations in forward-backward splitting for monotone operators
- On perturbed proximal gradient algorithms
- Strong conical hull intersection property, bounded linear regularity, Jameson's property \((G)\), and error bounds in convex optimization
- A First-Order Stochastic Primal-Dual Algorithm with Correction Step
- On the interchange of subdifferentiation and conditional expectation for convex functionals
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
- Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping. II: Mean-square and linear convergence
- Ergodic convergence of a stochastic proximal point algorithm
- A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
- Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates
Cited In (4)
- Primal-dual mirror descent method for constraint stochastic optimization problems
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Primal and dual linear decision rules in stochastic and robust optimization
- Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints