The Proximal Augmented Lagrangian Method for Nonsmooth Composite Optimization
From MaRDI portal
Publication:5223796
Abstract: We study a class of optimization problems in which the objective function is given by the sum of a differentiable but possibly nonconvex component and a nondifferentiable convex regularization term. We introduce an auxiliary variable to separate the objective function components and utilize the Moreau envelope of the regularization term to derive the proximal augmented Lagrangian, a continuously differentiable function obtained by restricting the augmented Lagrangian to the manifold that corresponds to explicit minimization over the variable in the nonsmooth term. The continuous differentiability of this function with respect to both primal and dual variables allows us to leverage the method of multipliers (MM) to compute optimal primal-dual pairs by solving a sequence of differentiable problems. The MM algorithm is applicable to a broader class of problems than proximal gradient methods, and it has stronger convergence guarantees and more refined step-size update rules than the alternating direction method of multipliers. These features make it an attractive option for solving structured optimal control problems. We also develop an algorithm based on the primal-descent dual-ascent gradient method and prove global (exponential) asymptotic stability when the differentiable component of the objective function is (strongly) convex and the regularization term is convex. Finally, we identify classes of problems for which the primal-dual gradient flow dynamics are convenient for distributed implementation and compare/contrast our framework with existing approaches.
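The primal-descent dual-ascent scheme described in the abstract can be sketched numerically. The sketch below is an illustrative assumption, not the authors' code: it instantiates the composite problem as a small lasso instance, with f(x) = ½‖Ax − b‖² and g = λ‖·‖₁, and uses the fact that the proximal operator of the scaled ℓ₁ norm is elementwise soft-thresholding, so the gradients of the proximal augmented Lagrangian L_μ(x, y) = f(x) + M_{μg}(x + μy) − (μ/2)‖y‖² (where M_{μg} is the Moreau envelope of g) have closed form. The function names, step size, and iteration count are hypothetical choices for this example.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: elementwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_gradient(A, b, lam, mu=1.0, alpha=1e-2, iters=20000):
    """Primal-descent dual-ascent on the proximal augmented Lagrangian
    for an illustrative lasso instance:

        minimize  0.5*||A x - b||^2 + lam*||x||_1.

    Uses L_mu(x, y) = f(x) + M_{mu g}(x + mu y) - (mu/2)*||y||^2,
    where M_{mu g} is the Moreau envelope of g = lam*||.||_1, whose
    gradient is (v - prox_{mu g}(v)) / mu at v = x + mu y.
    """
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    for _ in range(iters):
        v = x + mu * y
        p = soft_threshold(v, mu * lam)            # prox_{mu g}(v)
        grad_x = A.T @ (A @ x - b) + (v - p) / mu  # nabla_x L_mu
        grad_y = x - p                             # nabla_y L_mu
        x = x - alpha * grad_x                     # primal descent
        y = y + alpha * grad_y                     # dual ascent
    return x
```

Since f is strongly convex here (A has full column rank with high probability), the paper's global exponential stability result applies, and the iterates converge to a point satisfying the proximal-gradient fixed-point condition x* = prox_{tg}(x* − t∇f(x*)).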
Cited in (21)
- Constrained composite optimization and augmented Lagrangian methods
- Fast and stable nonconvex constrained distributed optimization: the ELLADA algorithm
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- Analytical convergence regions of accelerated gradient descent in nonconvex optimization under regularity condition
- Exponential stability of partial primal-dual gradient dynamics with nonsmooth objective functions
- Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
- Tracking control by the Newton–Raphson method with output prediction and controller speedup
- Local R-linear convergence of ADMM-based algorithm for \(\ell_1\)-norm minimization with linear and box constraints
- Semi-global exponential stability of augmented primal-dual gradient dynamics for constrained convex optimization
- Local properties and augmented Lagrangians in fully nonconvex composite optimization
- Augmented Lagrangian duality for composite optimization problems
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization
- An accelerated proximal augmented Lagrangian method and its application in compressive sensing
- Solving a class of nonsmooth resource allocation problems with directed graphs through distributed Lipschitz continuous multi-proximal algorithms
- Image multiplicative denoising using adaptive Euler's elastica as the regularization
- Distributed optimization of high-order nonlinear multi-agent systems with disturbance under switching topologies
- Convergence rate bounds for the mirror descent method: IQCs, Popov criterion and Bregman divergence
- Distributed coordination for nonsmooth convex optimization via saddle-point dynamics
- A proximal augmented method for semidefinite programming problems
- On a primal-dual Newton proximal method for convex quadratic programs
- Dynamical systems coupled with monotone set-valued operators: formalisms, applications, well-posedness, and stability