Combining Lagrangian decomposition and excessive gap smoothing technique for solving large-scale separable convex optimization problems
From MaRDI portal
Abstract: A new algorithm for solving large-scale convex optimization problems with a separable objective function is proposed. The basic idea is to combine three techniques: Lagrangian dual decomposition, excessive gap and smoothing. The main advantage of this algorithm is that it dynamically updates the smoothness parameters, which leads to numerically robust performance. The convergence of the algorithm is proved under weak conditions imposed on the original problem. The rate of convergence is \(O(1/k)\), where \(k\) is the iteration counter. In the second part of the paper, the algorithm is coupled with a dual scheme to construct a switching variant of dual decomposition. We discuss implementation issues and make a theoretical comparison. Numerical examples confirm the theoretical results.
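To illustrate the Lagrangian dual decomposition idea underlying the abstract, the toy sketch below solves a small separable quadratic program with a coupling equality constraint by dual gradient ascent. This is a generic textbook scheme on synthetic data, not the paper's algorithm: the strongly convex quadratic objectives already make the dual smooth, whereas the paper's contribution is to create and dynamically update that smoothness (via proximal/smoothing parameters) for general convex terms. All matrices, vectors, and step sizes here are illustrative assumptions.

```python
# Generic sketch (not the paper's method) of Lagrangian dual decomposition
# on a toy separable QP with synthetic data:
#   min 0.5*||x1 - c1||^2 + 0.5*||x2 - c2||^2   s.t.  A1 @ x1 + A2 @ x2 = b
# For a fixed multiplier y the objective splits, so each x_i is computed
# independently (in parallel, in closed form for quadratics); y is then
# updated by gradient ascent on the dual function.
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
A1 = rng.standard_normal((m, n))
A2 = rng.standard_normal((m, n))
c1, c2 = rng.standard_normal(n), rng.standard_normal(n)
b = rng.standard_normal(m)

# Lipschitz constant of the dual gradient: ||[A1 A2]||_2^2 (spectral norm).
L = np.linalg.norm(np.hstack([A1, A2]), 2) ** 2
alpha = 1.0 / L  # safe dual ascent step size

y = np.zeros(m)  # multiplier for the coupling constraint
for _ in range(20000):
    # Independent subproblems, closed form:
    #   x_i = argmin_x 0.5*||x - c_i||^2 + y @ (A_i @ x) = c_i - A_i.T @ y
    x1 = c1 - A1.T @ y
    x2 = c2 - A2.T @ y
    r = A1 @ x1 + A2 @ x2 - b   # dual gradient = coupling-constraint residual
    y = y + alpha * r           # dual ascent step

print(np.linalg.norm(A1 @ x1 + A2 @ x2 - b))  # residual shrinks toward 0
```

For general (non-strongly-convex) separable terms the subproblem minimizers need not be unique and the dual can be nonsmooth; the smoothing and excessive-gap machinery discussed in the paper addresses exactly that case.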
Recommendations
- An interior-point smoothing technique for Lagrangian relaxation in large-scale convex programming
- An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization
- A new technique for nonconvex primal-dual decomposition of a large-scale separable optimization problem
- Fast inexact decomposition algorithms for large-scale separable convex optimization
- Double smoothing technique for large-scale linearly constrained convex optimization
- Extrapolated smoothing descent algorithm for constrained nonconvex and nonsmooth composite problems
- Decomposition for structured convex programs with smooth multiplier methods
- scientific article; zbMATH DE number 3320828
- A variant nonmonotone smoothing algorithm with improved numerical results for large-scale LWCPs
- A primal majorized semismooth Newton-CG augmented Lagrangian method for large-scale linearly constrained convex programming
Cites work
- scientific article; zbMATH DE number 51132
- A Lagrangian dual method with self-concordant barriers for multi-stage stochastic convex programming
- A Parallel Algorithm for a Class of Convex Programs
- A primal-dual infeasible-interior-point algorithm for linear programming
- A proximal-based decomposition method for convex minimization problems
- Alternating Projection-Proximal Methods for Convex Programming and Variational Inequalities
- Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities
- An alternating direction-based contraction method for linearly constrained separable convex programming problems
- An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Applications of the method of partial inverses to convex programming: Decomposition
- Benchmarking optimization software with performance profiles.
- Decentralized Resource Allocation in Dynamic Networks of Agents
- Decomposition techniques in mathematical programming. Engineering and science applications.
- Distributed Spectrum Management Algorithms for Multiuser DSL Networks
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Experiments with primal-dual decomposition and subgradient methods for the uncapacitated facility location problem
- Fast multiple-splitting algorithms for convex optimization
- Faster and Simpler Algorithms for Multicommodity Flow and Other Fractional Packing Problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Improved Dual Decomposition Based Optimization for DSL Dynamic Spectrum Management
- Introductory lectures on convex optimization. A basic course.
- Mean value cross decomposition for nonlinear convex problems
- On Convergence of an Augmented Lagrangian Decomposition Method for Sparse Convex Optimization
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- On the Implementation of a Primal-Dual Interior Point Method
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
- Smooth minimization of non-smooth functions
- Two-level primal-dual proximal decomposition technique to solve large scale optimization problems
Cited in (18)
- An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
- Linearized generalized ADMM-based algorithm for multi-block linearly constrained separable convex programming in real-world applications
- A partially parallel splitting method for multiple-block separable convex programming with applications to robust PCA
- Solving nearly-separable quadratic optimization problems as nonsmooth equations
- Fast inexact decomposition algorithms for large-scale separable convex optimization
- A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
- An alternating trust region algorithm for distributed linearly constrained nonlinear programs, application to the optimal power flow problem
- A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints
- An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization
- Path-following gradient-based decomposition algorithms for separable convex optimization
- AAR-based decomposition algorithm for non-linear convex optimisation
- A first-order smoothing technique for a class of large-scale linear programs
- Nesterov's smoothing and excessive gap methods for an optimization problem in VLSI placement
- Successive smoothing algorithm for solving large-scale optimization models with fixed cost
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
- Parallel subgradient algorithm with block dual decomposition for large-scale optimization
- Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization
- Adaptive smoothing algorithms for nonsmooth composite convex minimization