A New Insight on Augmented Lagrangian Method with Applications in Machine Learning
From MaRDI portal
Publication:6493955
DOI: 10.1007/s10915-024-02518-0 · Wikidata: Q127175756 · Scholia: Q127175756 · MaRDI QID: Q6493955
Zheng Peng, Jianchao Bai, Unnamed Author
Publication date: 29 April 2024
Published in: Journal of Scientific Computing
Classifications: Convex programming (90C25); Numerical optimization and variational techniques (65K10); Complexity and performance of numerical algorithms (65Y20)
Cites Work
- A fast proximal point algorithm for \(\ell_{1}\)-minimization problem in compressed sensing
- A class of ADMM-based algorithms for three-block separable convex programming
- A class of customized proximal point algorithms for linearly constrained convex optimization
- A primal-dual prediction-correction algorithm for saddle point optimization
- Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a unified approach
- Latent semantic indexing: A probabilistic analysis
- Generalized symmetric ADMM for separable convex optimization
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Iteration complexity analysis of a partial LQP-based alternating direction method of multipliers
- An inexact accelerated stochastic ADMM for separable convex optimization
- A partially proximal S-ADMM for separable convex optimization with linear constraints
- A first-order inexact primal-dual algorithm for a class of convex-concave saddle point problems
- Two-step fixed-point proximity algorithms for multi-block separable convex problems
- Multiplier and gradient methods
- A customized proximal point algorithm for convex minimization with linear constraints
- Solving saddle point problems: a landscape of primal-dual algorithm with larger stepsizes
- A dual-primal balanced augmented Lagrangian method for linearly constrained convex programming
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Recovering Low-Rank and Sparse Components of Matrices from Incomplete and Noisy Observations
- Robust principal component analysis?
- Some continuity properties of polyhedral multifunctions
- Data-Driven Science and Engineering
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- A Generalized Primal-Dual Algorithm with Improved Convergence Condition for Saddle Point Problems
- Several Variants of the Primal-Dual Hybrid Gradient Algorithm with Applications
- A new model for sparse and low-rank matrix decomposition
- Convergence revisit on generalized symmetric ADMM
- On the Convergence of Primal-Dual Hybrid Gradient Algorithm