Theoretical and numerical comparison of first-order algorithms for cocoercive equations and smooth convex optimization

From MaRDI portal
Publication: 6358359

arXiv: 2101.06152
MaRDI QID: Q6358359
FDO: Q6358359


Authors: Luis M. Briceño-Arias, Nelly Pustelnik


Publication date: 15 January 2021

Abstract: This paper provides a theoretical and numerical comparison of classical first-order splitting methods for solving smooth convex optimization problems and cocoercive equations. From a theoretical point of view, we compare the convergence rates of the gradient descent, forward-backward, Peaceman-Rachford, and Douglas-Rachford algorithms for minimizing the sum of two smooth convex functions when one of them is strongly convex. A similar comparison is given in the more general cocoercive setting in the presence of strong monotonicity, and we observe that for some algorithms the convergence rates in optimization are strictly better than the corresponding rates for cocoercive equations. By exploiting the structure of our problems, we obtain rates that improve on those in the literature in several instances. Moreover, we indicate which algorithm achieves the smallest convergence rate depending on the strong convexity and cocoercivity parameters. From a numerical point of view, we verify our theoretical results by implementing and comparing the previous algorithms on well-established signal and image inverse problems involving sparsity. We replace the widely used ℓ1 norm with the Huber loss, and we observe that fully proximal-based strategies have numerical and theoretical advantages over methods using gradient steps.
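To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of forward-backward splitting applied to a Huber-regularized least-squares problem of the kind the abstract describes. The function names, problem dimensions, and parameter values (lam, delta, and the step size gamma = 1/L) are illustrative assumptions for this example.

```python
import numpy as np

def huber(t, delta):
    # Huber function: quadratic near the origin, linear in the tails
    # (a smooth convex surrogate for |t|).
    return np.where(np.abs(t) <= delta, 0.5 * t**2, delta * (np.abs(t) - 0.5 * delta))

def prox_huber(v, c, delta):
    # Closed-form proximity operator of c * Huber_delta: the quadratic branch
    # shrinks by 1/(1+c), the linear branch shifts toward zero by c*delta.
    return np.where(np.abs(v) <= delta * (1.0 + c),
                    v / (1.0 + c),
                    v - c * delta * np.sign(v))

def forward_backward(A, b, lam, delta, n_iter=500):
    # Forward-backward splitting for
    #   min_x 0.5 * ||A x - b||^2 + lam * sum_i Huber_delta(x_i):
    # an explicit gradient (forward) step on the data term followed by a
    # proximal (backward) step on the Huber regularizer.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the data-term gradient
    gamma = 1.0 / L                    # any step size in (0, 2/L) ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                              # forward step
        x = prox_huber(x - gamma * grad, gamma * lam, delta)  # backward step
    return x

# Illustrative sparse-recovery instance (sizes and parameters are arbitrary).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = forward_backward(A, b, lam=0.1, delta=0.01)
print(0.5 * np.linalg.norm(A @ x_hat - b)**2 + 0.1 * huber(x_hat, 0.01).sum())
```

Because the Huber loss is smooth and also has a closed-form proximity operator, the same term can be handled either by a gradient step or by a proximal step; this is exactly the trade-off between gradient-based and fully proximal strategies that the paper's comparison quantifies.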


