Self equivalence of the alternating direction method of multipliers
Publication:5350477
Abstract: The alternating direction method of multipliers (ADM or ADMM) breaks a complex optimization problem into much simpler subproblems. ADM algorithms are typically short and easy to implement, yet exhibit (nearly) state-of-the-art performance on large-scale optimization problems. To apply ADM, we first put a given problem into an "ADM-ready" form, so the final algorithm depends on the formulation. A problem like \(\min_x u(x) + v(Cx)\) has six different "ADM-ready" formulations: they can be in the primal or the dual form, and they differ in how dummy variables are introduced. To each "ADM-ready" formulation, ADM can be applied in two different orders, depending on how the primal variables are updated. In total, we get twelve different ADM algorithms! How do they compare to each other? Which algorithm should one choose? In this paper, we show that many of these ways of applying ADM are equivalent. Specifically, we show that ADM applied to a primal formulation is equivalent to ADM applied to its Lagrange dual, and that ADM is equivalent to a primal-dual algorithm applied to the saddle-point formulation of the same problem. These results are surprising since the primal and dual variables in ADM are seemingly treated very differently, and some previous work exhibits a preference for one over the other on specific problems. In addition, when one of the two objective functions is quadratic, possibly subject to an affine constraint, we show that swapping the update order of the two primal variables in ADM gives the same algorithm. These results identify the few truly different ADM algorithms for a problem; these generally have different forms of subproblems, from which it is easy to pick the one with the most computationally friendly subproblems.
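To illustrate the kind of "ADM-ready" splitting discussed in the abstract, here is a minimal sketch of one primal formulation with one particular update order, applied to a lasso instance \(\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1\). The lasso example, the function name admm_lasso, and the parameter choices are illustrative assumptions, not taken from the paper; the sketch only shows the generic ADMM pattern of two alternating subproblems followed by a dual update.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Minimal ADMM sketch (illustrative, not the paper's algorithm) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, written in the "ADM-ready" form
    min_{x,z} 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x - z = 0,
    i.e. the coupling matrix C is the identity here."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable (y / rho)
    Atb = A.T @ b
    # Cache a Cholesky factor for the quadratic x-subproblem.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        # x-update: quadratic subproblem, reduces to a linear solve.
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: prox of the l1 norm, i.e. soft-thresholding.
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        # Dual update on the constraint residual x - z.
        u = u + x - z
    return x

# Tiny usage example on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x_hat = admm_lasso(A, b, lam=0.5)
```

Swapping which of the two subproblems is solved first, or applying the same scheme to the Lagrange dual, yields other members of the twelve variants mentioned above; the paper's results describe when such variants produce identical iterates.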
Recommendations
- Understanding the convergence of the alternating direction method of multipliers: theoretical and computational perspectives
- On the global and linear convergence of the generalized alternating direction method of multipliers
- Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems
- ADMM for multiaffine constrained optimization
Cited in (22)
- Chordal decomposition in operator-splitting methods for sparse semidefinite programs
- A primal-dual flow for affine constrained convex optimization
- Accelerated stochastic Peaceman-Rachford method for empirical risk minimization
- The operator splitting schemes revisited: primal-dual gap and degeneracy reduction by a unified analysis
- Alternating direction method for separable variables under pair-wise constraints
- On the order of the operators in the Douglas-Rachford algorithm
- A generalized forward-backward splitting operator: degenerate analysis and applications
- Global convergence of ADMM in nonconvex nonsmooth optimization
- Proximal Splitting Algorithms for Convex Optimization: A Tour of Recent Advances, with New Twists
- A proximal point analysis of the preconditioned alternating direction method of multipliers
- Relative-error approximate versions of Douglas-Rachford splitting and special cases of the ADMM
- Equivalent resolvents of Douglas-Rachford splitting and other operator splitting algorithms: a unified degenerate proximal point analysis
- A parallel Douglas-Rachford algorithm for minimizing ROF-like functionals on images with values in symmetric Hadamard manifolds
- Conic optimization via operator splitting and homogeneous self-dual embedding
- Acceleration of primal-dual methods by preconditioning and simple subproblem procedures
- Douglas-Rachford splitting and ADMM for pathological convex optimization
- ARock: an algorithmic framework for asynchronous parallel coordinate updates
- Split-Douglas-Rachford algorithm for composite monotone inclusions and split-ADMM
- Douglas-Rachford splitting and ADMM for nonconvex optimization: tight convergence results
- Douglas-Rachford splitting and ADMM for nonconvex optimization: accelerated and Newton-type linesearch algorithms
- On the global and linear convergence of the generalized alternating direction method of multipliers
- A new primal-dual algorithm for minimizing the sum of three functions with a linear operator