Does $\ell_p$-Minimization Outperform $\ell_1$-Minimization?
Publication: Q4566536
DOI: 10.1109/TIT.2017.2717585
zbMATH Open: 1390.94511
arXiv: 1501.03704
OpenAlex: W2736716203
MaRDI QID: Q4566536
Authors: Le Zheng, Arian Maleki, Haolei Weng, Teng Long, Xiaodong Wang
Publication date: 27 June 2018
Published in: IEEE Transactions on Information Theory
Abstract: In many application areas we are faced with the following question: can we recover a sparse vector $x_o \in \mathbb{R}^N$ from its undersampled set of noisy observations $y \in \mathbb{R}^n$, $y = Ax_o + w$? The last decade has witnessed a surge of algorithms and theoretical results addressing this question. One of the most popular algorithms is the $\ell_p$-regularized least squares (LPLS) given by the following formulation: \[ \hat{x}(\gamma, p) \in \arg\min_x \frac{1}{2}\|y - Ax\|_2^2 + \gamma\|x\|_p^p, \] where $p \in [0, 1]$. Despite the non-convexity of these problems for $p < 1$, they are still appealing because of the following folklores in compressed sensing: (i) $\hat{x}(\gamma, p)$ is closer to $x_o$ than $\hat{x}(\gamma, 1)$; (ii) if we employ iterative methods that aim to converge to a local minimum of LPLS, then under good initialization these algorithms converge to a solution that is closer to $x_o$ than $\hat{x}(\gamma, 1)$. Despite the existence of plenty of empirical results that support these folklore theorems, theoretical progress toward establishing them has been very limited. This paper studies the above folklore theorems and establishes their scope of validity. Starting with the approximate message passing (AMP) algorithm as a heuristic method for solving LPLS, we study the impact of initialization on the performance of AMP. We then employ the replica analysis to show the connection between the solution of AMP and $\hat{x}(\gamma, p)$ in the asymptotic setting. This enables us to compare the accuracy of $\hat{x}(\gamma, p)$ for $p \in [0, 1]$. In particular, we accurately characterize the phase transition and noise sensitivity of LPLS for every $0 \le p \le 1$. Our results in the noiseless setting confirm that LPLS exhibits the same phase transition for every $0 \le p < 1$, and this phase transition is much higher than that of LASSO.
Full work available at URL: https://arxiv.org/abs/1501.03704
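The paper's analysis of LPLS is asymptotic (AMP plus the replica method), but the objective itself is straightforward to experiment with. Below is a minimal Python sketch, not the paper's method, that minimizes the LPLS objective by proximal gradient descent at the two endpoints $p = 1$ (soft thresholding, i.e. ISTA for LASSO) and $p = 0$ (hard thresholding); all function names, the step size rule, and the parameter values are illustrative assumptions.

```python
# A minimal sketch (not the paper's AMP/replica analysis): proximal-gradient
# descent for the LPLS objective 0.5*||y - A x||_2^2 + gamma*||x||_p^p at the
# endpoints p = 1 and p = 0, where the proximal operator has a closed form.
import numpy as np

def prox(v, t, p):
    """Proximal operator of t*||.||_p^p for p in {0, 1}."""
    if p == 1:   # soft thresholding
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    if p == 0:   # hard thresholding: keep entries with |v| > sqrt(2t)
        return np.where(np.abs(v) > np.sqrt(2.0 * t), v, 0.0)
    raise ValueError("closed-form prox implemented only for p in {0, 1}")

def lpls(y, A, gamma, p, iters=500):
    """Iterate x <- prox(x - step*A^T(Ax - y)); finds a (local) minimum."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = sigma_max(A)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox(x - step * A.T @ (A @ x - y), step * gamma, p)
    return x

# Tiny demo: recover a 10-sparse vector from 80 noisy Gaussian measurements.
rng = np.random.default_rng(0)
n, N, k = 80, 200, 10
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_o = np.zeros(N)
x_o[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_o + 0.01 * rng.standard_normal(n)
for p in (0, 1):
    err = np.linalg.norm(lpls(y, A, gamma=0.01, p=p) - x_o)
    print(f"p={p}: ||x_hat - x_o||_2 = {err:.4f}")
```

Note that this toy experiment initializes at zero; the paper's point (ii) is precisely that the quality of the local minimum reached for $p < 1$ can depend strongly on initialization.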
Cited In (14)
- Nonlinear frames and sparse reconstructions in Banach spaces
- LASSO risk and phase transition under dependence
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Overcoming the limitations of phase transition by higher order analysis of regularization techniques
- Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator
- Neural network for a class of sparse optimization with \(L_0\)-regularization
- Rotation to sparse loadings using \(L^p\) losses and related inference problems
- Title not available
- Which bridge estimator is the best for variable selection?
- The finite steps of convergence of the fast thresholding algorithms with \(f\)-feedbacks in compressed sensing
- A survey on compressive sensing: classical results and recent advancements
- Computing the degrees of freedom of rank-regularized estimators and cousins
- Matrix completion with nonconvex regularization: spectral operators and scalable algorithms
- Phase transition and higher order analysis of \(L_q\) regularization under dependence