A modified self-adaptive dual ascent method with relaxed stepsize condition for linearly constrained quadratic convex optimization
From MaRDI portal
Publication: 2691343
DOI: 10.3934/jimo.2022101 · OpenAlex: W4285270432 · MaRDI QID: Q2691343
Publication date: 29 March 2023
Published in: Journal of Industrial and Management Optimization
Full work available at URL: https://doi.org/10.3934/jimo.2022101
Keywords: variational inequality; dual ascent method; augmented Lagrangian algorithm; self-adaptive stepsize; quadratic optimization with linear constraints
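As context for the keywords above, the textbook dual ascent scheme for a linearly constrained quadratic program alternates a primal minimization of the Lagrangian with a gradient step on the dual variable. The sketch below shows only this classical scheme with a fixed stepsize; the paper's modified self-adaptive stepsize rule and relaxed stepsize condition are not reproduced here.

```python
import numpy as np

def dual_ascent(Q, c, A, b, step=0.1, iters=500):
    """Classical dual ascent for min 1/2 x^T Q x + c^T x  s.t.  A x = b,
    with Q symmetric positive definite. Fixed stepsize `step` is an
    illustrative choice, not the paper's self-adaptive rule."""
    lam = np.zeros(A.shape[0])               # dual variable (multiplier)
    for _ in range(iters):
        # primal step: minimize the Lagrangian over x for fixed lam,
        # i.e. solve Q x = -(c + A^T lam)
        x = np.linalg.solve(Q, -(c + A.T @ lam))
        # dual ascent step along the constraint residual A x - b
        lam = lam + step * (A @ x - b)
    return x, lam

# sanity check: min 1/2 ||x||^2  s.t.  x1 + x2 = 2  has solution (1, 1)
Q = np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x, lam = dual_ascent(Q, c, A, b)
```

With `Q = I` the dual iteration contracts geometrically for any stepsize in (0, 1), so the fixed `step=0.1` here converges to `x = (1, 1)` with multiplier `lam = -1`.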
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Fast linearized Bregman iteration for compressive sensing and sparse denoising
- A sequential updating scheme of the Lagrange multiplier for separable convex programming
- Accelerated Uzawa methods for convex optimization
- Linearized Bregman iterations for compressed sensing
- A Singular Value Thresholding Algorithm for Matrix Completion
- Linearized Bregman Iterations for Frame-Based Image Deblurring
- The Split Bregman Method for L1-Regularized Problems
- Lagrangian dual coordinatewise maximization algorithm for network transportation problems with quadratic costs
- A lagrangean relaxation algorithm for the constrained matrix problem
- Relaxation Methods for Network Flow Problems with Convex Arc Costs
- Computational development of a lagrangian dual approach for quadratic networks
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Convex programming in Hilbert space