On the Behavior of the Douglas--Rachford Algorithm for Minimizing a Convex Function Subject to a Linear Constraint

From MaRDI portal
Publication:4971015

DOI: 10.1137/19M1281538
zbMATH Open: 1451.90117
arXiv: 1908.05406
OpenAlex: W3088059505
MaRDI QID: Q4971015


Authors: Heinz H. Bauschke, Walaa M. Moursi


Publication date: 8 October 2020

Published in: SIAM Journal on Optimization

Abstract: The Douglas-Rachford algorithm (DRA) is a powerful optimization method for minimizing the sum of two convex (not necessarily smooth) functions. The vast majority of previous research dealt with the case when the sum has at least one minimizer. In the absence of minimizers, it was recently shown that for the case of two indicator functions, the DRA converges to a best approximation solution. In this paper, we present a new convergence result on the DRA applied to the problem of minimizing a convex function subject to a linear constraint. Indeed, a normal solution may be found even when the domain of the objective function and the linear subspace constraint have no point in common. As an important application, a new parallel splitting result is provided. We also illustrate our results through various examples.
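To make the abstract's setting concrete, the standard DRA iteration for minimizing f(x) subject to x in a linear subspace C uses the proximal operator of f and the projection onto C. The sketch below is an illustrative toy example, not taken from the paper: the objective, step size, and problem data are assumptions, and the (consistent) instance chosen has a minimizer, so it shows the classical convergent regime rather than the paper's inconsistent case.

```python
import numpy as np

# Toy Douglas-Rachford sketch (hypothetical example, not from the paper):
# minimize f(x) = 0.5*||x - a||^2  subject to  x in C = {x : x_1 = x_2}.
# The resolvent of f is its proximal map; the resolvent of the
# constraint's indicator function is the projection onto C.

def prox_f(z, a, t=1.0):
    # prox of t*f at z for f(x) = 0.5*||x - a||^2
    return (z + t * a) / (1.0 + t)

def proj_C(z):
    # projection onto the diagonal subspace {x : x_1 = x_2}
    return np.full_like(z, z.mean())

a = np.array([3.0, 1.0])
z = np.zeros(2)                  # governing sequence
for _ in range(200):
    x = proj_C(z)                # shadow sequence: resolvent of the constraint
    y = prox_f(2 * x - z, a)     # reflected resolvent of the objective
    z = z + y - x                # Douglas-Rachford update
x = proj_C(z)
# x converges to [2, 2], the constrained minimizer (the mean of a on C)
```

Here the shadow sequence x = proj_C(z) converges to the solution; when the constraint and the objective's domain do not intersect (the paper's setting), the governing sequence z instead drifts, and the paper analyzes what the shadow sequence still recovers.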


Full work available at URL: https://arxiv.org/abs/1908.05406






Cited In (10)





