Two parallel distribution algorithms for convex constrained minimization problems (Q884654)
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Two parallel distribution algorithms for convex constrained minimization problems | scientific article |
Statements
Two parallel distribution algorithms for convex constrained minimization problems (English)
6 June 2007
The aim of this paper is to study constrained minimization problems whose objective functions are convex. A nonsmooth constrained parallel gradient distribution (PGD) algorithm is constructed based on a technique due to \textit{Y. R. He} [J. Optimization Theory Appl. 111, 137--153 (2001; Zbl 0987.90067)], where the directions and stepsizes of the subproblems are obtained by solving certain quadratic programs and by an Armijo-type line search. The algorithm is similar in spirit to the classical block Jacobi and coordinate descent methods. Main result: the PGD algorithm differs significantly from those methods in that each subproblem in the parallelization phase needs to be solved only approximately, for example by executing a single descent iteration.

In a parallel variable distribution (PVD) algorithm, the variables are distributed among processors; each processor has primary responsibility for updating its own block of variables, but also allows the remaining secondary variables to change in a restricted manner. The authors propose a nonsmooth inexact PVD algorithm, in which the subproblems in the parallel step need not be solved exactly, and they establish the convergence of this algorithm under suitable assumptions.
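Both schemes share the same overall structure: the variables are split into blocks, each block is improved only approximately in parallel, and a synchronization step combines the candidates. The following Python sketch is meant only to illustrate this general parallel-distribution idea on a smooth convex test problem; it is not the authors' PGD or PVD algorithm, and all names (`armijo_step`, `pvd_like_iteration`, the block partition, the quadratic objective) are illustrative assumptions. Each block takes a single Armijo-damped steepest-descent step on its own variables while the other variables are held fixed (a simplification of the restricted change allowed to secondary variables), and the synchronization step keeps the best candidate.

```python
import numpy as np


def armijo_step(f, grad, x, d, sigma=1e-4, beta=0.5, t0=1.0):
    """Backtracking Armijo line search along the descent direction d."""
    t = t0
    fx, g = f(x), grad(x)
    while f(x + t * d) > fx + sigma * t * g.dot(d):
        t *= beta
    return t


def pvd_like_iteration(f, grad, x, blocks):
    """One parallelization/synchronization cycle in the spirit of parallel
    variable distribution: every block performs a single inexact descent
    step on its own variables (the other variables are held fixed), and the
    synchronization step keeps the candidate with the smallest objective."""
    g = grad(x)
    candidates = []
    for idx in blocks:                      # conceptually, one processor per block
        d = np.zeros_like(x)
        d[idx] = -g[idx]                    # steepest descent restricted to this block
        t = armijo_step(f, grad, x, d)
        candidates.append(x + t * d)
    return min(candidates, key=f)           # synchronization: best candidate wins


if __name__ == "__main__":
    # Toy convex quadratic objective (hypothetical test problem).
    A = np.diag([1.0, 4.0, 9.0, 16.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    x = np.ones(4)
    blocks = [np.array([0, 1]), np.array([2, 3])]
    for _ in range(100):
        x = pvd_like_iteration(f, grad, x, blocks)
    print(x)                                # approaches the minimizer at the origin
```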
nonsmooth optimization
parallel algorithm
convex programming
Moreau-Yosida regularization
convergence
parallel gradient distribution algorithm
coordinate descent methods