A fully stochastic primal-dual algorithm

From MaRDI portal
Publication: 828693

DOI: 10.1007/s11590-020-01614-y
zbMATH Open: 1466.90058
arXiv: 1901.08170
OpenAlex: W3042700844
MaRDI QID: Q828693

Adil Salim, Pascal Bianchi, W. Hachem

Publication date: 5 May 2021

Published in: Optimization Letters

Abstract: A new stochastic primal-dual algorithm for solving a composite optimization problem is proposed. It is assumed that all the functions/operators that enter the optimization problem are given as statistical expectations. These expectations are unknown but revealed across time through i.i.d. realizations. The proposed algorithm is proven to converge to a saddle point of the Lagrangian function. In the framework of monotone operator theory, the convergence proof relies on recent results on the stochastic Forward-Backward algorithm involving random monotone operators. An example of convex optimization under stochastic linear constraints is considered.
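To illustrate the setting the abstract describes, the sketch below runs a generic stochastic primal-dual (Arrow-Hurwicz-type) iteration on a toy strongly convex problem with one stochastic linear constraint. This is not the paper's exact algorithm (which handles composite terms via a stochastic Forward-Backward scheme); the problem data, step-size schedule, and noise model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: min_x (1/2)||x - c||^2  s.t.  a^T x = b,
# where c, a, b are only observed through i.i.d. noisy realizations.
c = np.array([1.0, 1.0])
a = np.array([1.0, 1.0])
b = 1.0
# Closed-form solution of the deterministic problem: x* = [0.5, 0.5].
x_star = c - ((a @ c - b) / (a @ a)) * a

x = np.zeros(2)      # primal variable
lam = 0.0            # dual variable (Lagrange multiplier)
x_avg = np.zeros(2)  # Polyak-Ruppert average of the primal iterates

for k in range(200_000):
    # Fresh i.i.d. noisy realizations of the problem data.
    c_k = c + 0.1 * rng.standard_normal(2)
    a_k = a + 0.1 * rng.standard_normal(2)
    b_k = b + 0.1 * rng.standard_normal()
    gamma = 0.5 / (k + 10) ** 0.6  # decreasing step size

    # Gradient of the sampled Lagrangian and sampled constraint residual,
    # both evaluated at the current iterate (keeps the updates unbiased).
    grad_x = (x - c_k) + lam * a_k
    resid = a_k @ x - b_k

    x = x - gamma * grad_x       # primal descent step
    lam = lam + gamma * resid    # dual ascent step
    x_avg += (x - x_avg) / (k + 1)
```

With decreasing steps and averaging, `x_avg` settles near the saddle point's primal component `x_star`, mirroring the almost-sure convergence the abstract states for the actual algorithm.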


Full work available at URL: https://arxiv.org/abs/1901.08170





Cites Work


Cited In (4)






