A Lagrange-Newton algorithm for sparse nonlinear programming
From MaRDI portal
Abstract: The sparse nonlinear programming (SNP) problem has wide applications in signal and image processing, machine learning, pattern recognition, finance and management, etc. However, the computational challenge posed by SNP has not yet been well resolved, owing to the nonconvex and discontinuous \(\ell_0\)-norm involved. In this paper, we address this numerical challenge by developing a fast Newton-type algorithm. As a theoretical cornerstone, we establish a first-order optimality condition for SNP based on the concept of strong \(\beta\)-Lagrangian stationarity via the Lagrangian function, and reformulate it as a system of nonlinear equations called the Lagrangian equations. The nonsingularity of the corresponding Jacobian is discussed, and on this basis the Lagrange-Newton algorithm (LNA) is proposed. Under mild conditions, we establish the local quadratic convergence of LNA and estimate its iteration complexity. To further demonstrate the efficiency of the proposed algorithm, we apply LNA to two application problems arising from compressed sensing and sparse high-order portfolio selection, in which significant benefits accrue from the restricted Newton step in LNA.
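To make the pipeline in the abstract concrete, the sketch below shows a generic restricted Lagrange-Newton iteration for the model problem \(\min f(x)\) s.t. \(h(x)=0\), \(\|x\|_0 \le s\): estimate a support of size \(s\), assemble the restricted Lagrangian equations, and take a Newton step on them. This is a minimal illustration under stated assumptions, not the authors' exact LNA; the helper names (f_grad, lag_hess, h, h_jac) are hypothetical user-supplied oracles, and the hard-thresholding support rule is a simple stand-in for the strong \(\beta\)-Lagrangian stationarity condition developed in the paper.

```python
import numpy as np

def lagrange_newton_sketch(f_grad, lag_hess, h, h_jac, x0, lam0, s,
                           tol=1e-8, max_iter=50):
    """Generic restricted Lagrange-Newton iteration for
        min f(x)  s.t.  h(x) = 0,  ||x||_0 <= s,
    with Lagrangian L(x, lam) = f(x) + lam^T h(x).
    Hypothetical oracles (not from the paper):
      f_grad(x)      -> grad f(x),           shape (n,)
      lag_hess(x, l) -> Hessian of L in x,   shape (n, n)
      h(x)           -> constraint values,   shape (m,)
      h_jac(x)       -> Jacobian of h,       shape (m, n)
    """
    x, lam = x0.astype(float).copy(), lam0.astype(float).copy()
    n, m = x.size, lam.size
    for _ in range(max_iter):
        gL = f_grad(x) + h_jac(x).T @ lam        # grad_x L(x, lam)
        # Support estimate: s largest-magnitude entries of a gradient
        # step (a hard-thresholding stand-in for the paper's rule).
        T = np.argsort(-np.abs(x - gL))[:s]
        # Residual of the restricted Lagrangian equations:
        # grad_T L = 0, h(x) = 0 (x is kept zero off the support).
        F = np.concatenate([gL[T], h(x)])
        if np.linalg.norm(F) < tol:
            break
        # Restricted KKT system  [H_TT  J_T^T; J_T  0] d = -F.
        H, J = lag_hess(x, lam), h_jac(x)
        K = np.block([[H[np.ix_(T, T)], J[:, T].T],
                      [J[:, T], np.zeros((m, m))]])
        d = np.linalg.solve(K, -F)               # assumes a nonsingular Jacobian
        x_new = np.zeros(n)
        x_new[T] = x[T] + d[:s]                  # Newton step on the support only
        x, lam = x_new, lam + d[s:]
    return x, lam
```

The restriction of the Newton system to an \(s \times s\) (plus constraint) block is what makes each step cheap when \(s \ll n\); in the paper, the nonsingularity of the Jacobian of the Lagrangian equations is what underwrites the local quadratic convergence.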
Recommendations
- Optimality conditions for sparse nonlinear programming
- Sparsity constrained nonlinear optimization: optimality conditions and algorithms
- A Regularized Newton Method for \(\ell_q\)-Norm Composite Optimization Problems
- A sparse nonlinear optimization algorithm
- Newton method for \(\ell_0\)-regularized optimization
Cites work
- scientific article; zbMATH DE number 1163113 (no title available)
- scientific article; zbMATH DE number 6982922 (no title available)
- scientific article; zbMATH DE number 7370529 (no title available)
- A convergent iterative hard thresholding for nonnegative sparsity optimization
- A null-space-based weighted \(\ell_1\) minimization approach to compressed sensing
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- An efficient optimization approach for a cardinality-constrained index tracking problem
- An interior-point method for large-scale \(l_1\)-regularized logistic regression
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Compressed sensing
- Constraint qualifications and optimality conditions for optimization problems with cardinality constraints
- DC formulations and algorithms for sparse optimization problems
- Gradient Pursuits
- Hard thresholding pursuit: an algorithm for compressive sensing
- Iterative hard thresholding for compressed sensing
- Manifold elastic net: a unified framework for sparse dimension reduction
- Minimization of \(\ell_{1-2}\) for compressed sensing
- On solutions of sparsity constrained optimization
- On the minimization over sparse symmetric sets: projections, optimality conditions, and algorithms
- Optimal cardinality constrained portfolio selection
- Optimality conditions for sparse nonlinear programming
- Restart procedures for the conjugate gradient method
- Restricted Robinson constraint qualification and optimality for cardinality-constrained cone programming
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Solving the OSCAR and SLOPE models using a semismooth Newton-based augmented Lagrangian method
- Sparse Approximate Solutions to Linear Systems
- Sparse Approximation via Penalty Decomposition Methods
- Sparse and redundant representations: from theory to applications in signal and image processing
- Sparsity constrained nonlinear optimization: optimality conditions and algorithms
- Subspace Pursuit for Compressive Sensing Signal Reconstruction
- Superlinearly convergent variable metric algorithms for general nonlinear programming problems
- The sparse principal component analysis problem: optimality conditions and algorithms
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (10)
- A conjugate gradient algorithm for sparse linear inequalities
- A Lagrange–Newton algorithm for tensor sparse principal component analysis
- Greedy Gauss-Newton algorithms for finding sparse solutions to nonlinear underdetermined systems of equations
- Penalty method for the sparse portfolio optimization problem
- Normal Cones Intersection Rule and Optimality Analysis for Low-Rank Matrix Optimization with Affine Manifolds
- Solving Large Sparse Nonlinear Programs Using GRG
- A greedy Newton-type method for multiple sparse constraint problem
- Sparse nonnegative solution of underdetermined linear equations by linear programming
- An algorithm for solving sparse nonlinear least squares problems
- The sparse(st) optimization problem: reformulations, optimality, stationarity, and numerical results
This page was built for publication: A Lagrange-Newton algorithm for sparse nonlinear programming (MaRDI item Q2089792)