Just relax: convex programming methods for identifying sparse signals in noise



DOI: 10.1109/TIT.2005.864420

zbMath: 1288.94025

Wikidata: Q59750791 (Scholia: Q59750791)

MaRDI QID: Q3547718

Joel A. Tropp

Publication date: 21 December 2008

Published in: IEEE Transactions on Information Theory


Mathematics Subject Classification (MSC):

90C25: Convex programming

94A12: Signal theory (characterization, reconstruction, filtering, etc.)

94A13: Detection theory in information and communication theory
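
The paper's subject is convex relaxation for identifying sparse signals from noisy measurements, centered on \(\ell_1\)-penalized least squares (the lasso / basis-pursuit-denoising program). As a minimal illustration only, the sketch below solves that convex program with iterative soft-thresholding (ISTA), a standard generic solver rather than anything specific to this paper; the dimensions, penalty parameter, and noise level are arbitrary assumptions.

    import numpy as np

    def ista_l1(A, b, gamma, iters=500):
        # Minimize 0.5*||A x - b||_2^2 + gamma*||x||_1 via iterative
        # soft-thresholding (ISTA); 1/L is a safe step size, where L is
        # the largest eigenvalue of A^T A (squared spectral norm of A).
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - b) / L  # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - gamma / L, 0.0)  # soft-threshold
        return x

    # Toy instance: a 5-sparse signal observed through a random matrix plus noise.
    rng = np.random.default_rng(0)
    n, d, k = 60, 128, 5
    A = rng.standard_normal((n, d)) / np.sqrt(n)
    x_true = np.zeros(d)
    x_true[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true + 0.01 * rng.standard_normal(n)
    x_hat = ista_l1(A, b, gamma=0.05)
    print("estimation error:", np.linalg.norm(x_hat - x_true))

Larger values of gamma drive more coefficients exactly to zero; the paper's results give conditions under which the minimizer of this program correctly identifies the support of the underlying sparse signal.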


Related Items

An adaptive inverse scale space method for compressed sensing
A New Computational Method for the Sparsest Solutions to Systems of Linear Equations
Towards a Mathematical Theory of Super-resolution
Structured sparsity through convex optimization
A general theory of concave regularization for high-dimensional sparse estimation problems
Nearly unbiased variable selection under minimax concave penalty
A unified approach to model selection and sparse recovery using regularized least squares
Locally sparse reconstruction using the \(\ell^{1,\infty}\)-norm
Best subset selection via a modern optimization lens
Relationship between the optimal solutions of least squares regularized with \(\ell_0\)-norm and constrained by \(k\)-sparsity
Gradient methods for minimizing composite functions
Recovery of high-dimensional sparse signals via \(\ell_1\)-minimization
ParNes: A rapidly convergent algorithm for accurate recovery of sparse and approximately sparse signals
Model-based multiple rigid object detection and registration in unstructured range data
The residual method for regularizing ill-posed problems
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Two are better than one: fundamental parameters of frame coherence
Convergence of fixed-point continuation algorithms for matrix rank minimization
A coordinate gradient descent method for \(\ell_1\)-regularized convex minimization
Fixed point and Bregman iterative methods for matrix rank minimization
A semidefinite programming study of the Elfving theorem
Adaptive algorithms for sparse system identification
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Dualization of signal recovery problems
Exact optimization for the \(\ell^1\)-compressive sensing problem using a modified Dantzig-Wolfe method
Autoregressive process modeling via the Lasso procedure
On verifiable sufficient conditions for sparse signal recovery via \(\ell_1\) minimization
Direct data domain STAP using sparse representation of clutter spectrum
Registration-based compensation using sparse representation in conformal-array STAP
Solve exactly an under determined linear system by minimizing least squares regularized with an \(\ell_0\) penalty
A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
Inferring stable genetic networks from steady-state data
Spectral dynamics and regularization of incompletely and irregularly measured data
Regularized learning in Banach spaces as an optimization problem: representer theorems
Iterative thresholding for sparse approximations
Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms
Enhancing sparsity by reweighted \(\ell_1\) minimization
High-dimensional variable selection
High-dimensional analysis of semidefinite relaxations for sparse principal components
Non-convex sparse regularisation
A primal dual active set with continuation algorithm for the \(\ell^0\)-regularized optimization problem
Optimal dual certificates for noise robustness bounds in compressive sensing
On the conditioning of random subdictionaries
Sparse approximate solution of partial differential equations
High-dimensional Ising model selection using \(\ell_1\)-regularized logistic regression
When do stepwise algorithms meet subset selection criteria?
Lasso-type recovery of sparse representations for high-dimensional data
Relaxed maximum a posteriori fault identification
Average performance of the approximation in a dictionary using an \(\ell_0\) objective
Mixed linear system estimation and identification
Beyond canonical dc-optimization: the single reverse polar problem
High-dimensional covariance estimation by minimizing \(\ell_1\)-penalized log-determinant divergence
Homogeneous penalizers and constraints in convex image restoration
An evaluation of the sparsity degree for sparse recovery with deterministic measurement matrices
Stable restoration and separation of approximately sparse signals
Iterative reweighted noninteger norm regularizing SVM for gene expression data classification
A numerical exploration of compressed sampling recovery
Alternating direction method of multipliers for solving dictionary learning models
Robust computation of linear models by convex relaxation
A modified Newton projection method for \(\ell_1\)-regularized least squares image deblurring
Support union recovery in high-dimensional multivariate regression
Consistency of \(\ell_1\) recovery from noisy deterministic measurements
Sparsity- and continuity-promoting seismic image recovery with curvelet frames
Rodeo: Sparse, greedy nonparametric regression
Randomized pick-freeze for sparse Sobol indices estimation in high dimension
Duality and Convex Programming
Resolution Analysis of Imaging with \(\ell_1\) Optimization
Low Complexity Regularization of Linear Inverse Problems
Proximal Splitting Methods in Signal Processing
Linearized Bregman iterations for compressed sensing
Necessary and sufficient conditions for linear convergence of \(\ell_1\)-regularization
Average Performance of the Sparsest Approximation Using a General Dictionary
A component lasso