The Sparse(st) Optimization Problem: Reformulations, Optimality, Stationarity, and Numerical Results
From MaRDI portal
Publication: Q6414271
arXiv: 2210.09589 · MaRDI QID: Q6414271 · FDO: Q6414271
Authors: Christian Kanzow, Alexandra Schwarz, Felix Weiss
Publication date: 18 October 2022
Abstract: We consider the sparse optimization problem with nonlinear constraints and an objective function given by the sum of a general smooth mapping and an additional term defined by the ℓ0-quasi-norm. This term is used to obtain sparse solutions, but it is difficult to handle due to its nonconvexity and nonsmoothness (the sparsity-improving term is even discontinuous). The aim of this paper is to present two reformulations of this program as a smooth nonlinear program with complementarity-type constraints. We show that these programs are equivalent in terms of local and global minima and introduce a problem-tailored stationarity concept, which turns out to coincide with the standard KKT conditions of the two reformulated problems. In addition, a suitable constraint qualification as well as second-order conditions for the sparse optimization problem are investigated. These are then used to show that three Lagrange-Newton-type methods are locally fast convergent. Numerical results on different classes of test problems indicate that these methods can be used to drastically improve sparse solutions obtained by some other (globally convergent) methods for sparse optimization problems.
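The problem class and one complementarity-type reformulation of the kind the abstract refers to can be sketched as follows. This is a sketch in the style of the well-known Burdakov–Kanzow–Schwartz relaxation for cardinality terms; the paper's two exact reformulations may differ in detail.

```latex
% Sparse optimization problem (lambda > 0 a sparsity weight,
% f smooth, g and h smooth constraint mappings):
\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \, \|x\|_0
\quad \text{s.t.} \quad g(x) \le 0, \; h(x) = 0.

% Introducing auxiliary variables y in [0,1]^n with the
% complementarity condition x_i * y_i = 0: at any point where y is
% componentwise maximal, e^T y counts the zero entries of x, so
% ||x||_0 = n - e^T y. This yields the smooth program with
% complementarity-type constraints:
\min_{x \in \mathbb{R}^n,\, y \in \mathbb{R}^n} \;
  f(x) + \lambda \, (n - e^\top y)
\quad \text{s.t.} \quad
  g(x) \le 0, \; h(x) = 0, \;
  0 \le y \le e, \;
  x_i \, y_i = 0 \quad (i = 1, \dots, n).
```

Since maximizing e^T y is favored by the objective, y_i is driven to 1 exactly where x_i = 0, so the discontinuous ℓ0 term is replaced by smooth data at the cost of complementarity constraints, for which KKT-type stationarity concepts and Newton-type methods are available.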