Decoding by Linear Programming

From MaRDI portal

Publication: 3546644


DOI: 10.1109/TIT.2005.858979
zbMath: 1264.94121
Wikidata: Q98839711
Scholia: Q98839711
MaRDI QID: Q3546644

Emmanuel J. Candès, Terence C. Tao

Publication date: 21 December 2008

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://doi.org/10.1109/tit.2005.858979


90C08: Special problems of linear programming (transportation, multi-index, data envelopment analysis, etc.)

94B35: Decoding
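The decoding problem this paper studies — recovering a message from a codeword corrupted in a few unknown positions by minimizing the \(\ell_1\) norm of the residual, a linear program — can be sketched as follows. This is a minimal illustration, not the authors' code; the matrix sizes, corruption count, and the use of `scipy.optimize.linprog` are assumptions made for the demo.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 32, 5  # codeword length, message length, number of corruptions

A = rng.standard_normal((n, m))        # random Gaussian coding matrix
x_true = rng.standard_normal(m)        # message to encode
e = np.zeros(n)                        # sparse error vector
e[rng.choice(n, size=k, replace=False)] = 10 * rng.standard_normal(k)
y = A @ x_true + e                     # corrupted codeword

# Decode: min_x ||y - A x||_1, written as an LP in variables z = [x, t]:
#   minimize sum(t)  subject to  -t <= y - A x <= t  (elementwise)
c = np.concatenate([np.zeros(m), np.ones(n)])
A_ub = np.block([[ A, -np.eye(n)],     #  A x - t <= y
                 [-A, -np.eye(n)]])    # -A x - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (m + n))  # all variables free
x_hat = res.x[:m]

print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

With a small fraction of corrupted entries (here 5 of 128) the LP typically recovers `x_true` exactly up to solver tolerance, which is the phenomenon the paper makes rigorous.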


Related Items

Nearly unbiased variable selection under minimax concave penalty, Selection by partitioning the solution paths, A unified approach to model selection and sparse recovery using regularized least squares, Orthogonal one step greedy procedure for heteroscedastic linear models, Analysis of a nonsmooth optimization approach to robust estimation, Locally sparse reconstruction using the \(\ell^{1,\infty}\)-norm, On solutions of sparsity constrained optimization, The gap between the null space property and the restricted isometry property, Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems, Regularity properties for sparse regression, Semidefinite programming relaxation methods for global optimization problems with sparse polynomials and unbounded semialgebraic feasible sets, Operator theoretic framework for optimal placement of sensors and actuators for control of nonequilibrium dynamics, A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron, Stable signal recovery from phaseless measurements, The benefit of group sparsity in group inference with de-biased scaled group Lasso, Sparse solutions of linear complementarity problems, Compressed blind signal reconstruction model and algorithm, DC approximation approaches for sparse optimization, Coherence of sensing matrices coming from algebraic-geometric codes, Sparse high-dimensional FFT based on rank-1 lattice sampling, The non-convex sparse problem with nonnegative constraint for signal reconstruction, Matrix sparsification and the sparse null space problem, 1-bit compressive sensing: reformulation and RRSP-based sign recovery theory, Reweighted \(\ell_1\) minimization method for stochastic elliptic differential equations, A weighted \(\ell_1\)-minimization approach for sparse polynomial chaos expansions, Block sparse recovery via mixed \(l_2/l_1\) minimization, The essential ability of sparse reconstruction of different 
compressive sensing strategies, Influence factors of sparse microwave imaging radar system performance: approaches to waveform design and platform motion analysis, Sparse microwave imaging: principles and applications, Compressed sensing SAR imaging based on sparse representation in fractional Fourier domain, Recovery of high-dimensional sparse signals via \(\ell_1\)-minimization, The bounds of restricted isometry constants for low rank matrices recovery, Accelerated Bregman method for linearly constrained \(\ell _1-\ell _2\) minimization, \(s\)-goodness for low-rank matrix recovery, Clustering and feature selection using sparse principal component analysis, Sparsity and non-Euclidean embeddings, On exact recovery of sparse vectors from linear measurements, Theory of compressive sensing via \(\ell_1\)-minimization: a non-RIP analysis and extensions, Nearly optimal minimax estimator for high-dimensional sparse linear regression, The \(L_1\) penalized LAD estimator for high dimensional linear regression, Nonconvex compressed sampling of natural images and applications to compressed MR imaging, Recovery of sparsest signals via \(\ell^q \)-minimization, Phase transition in limiting distributions of coherence of high-dimensional random matrices, Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization, Two are better than one: fundamental parameters of frame coherence, A novel sparsity reconstruction method from Poisson data for 3D bioluminescence tomography, 2D sparse signal recovery via 2D orthogonal matching pursuit, Perturbations of measurement matrices and dictionaries in compressed sensing, Numerically erasure-robust frames, Average best \(m\)-term approximation, Compressive sensing of analog signals using discrete prolate spheroidal sequences, New bounds for RIC in compressed sensing, Iterative reweighted minimization methods for \(l_p\) regularized unconstrained nonlinear programming, Guaranteed clustering and biclustering via 
semidefinite programming, Sparse regression and support recovery with \(\mathbb{L}_2\)-boosting algorithms, Stable recovery of sparse signals via \(\ell_p\)-minimization, Sparse recovery on Euclidean Jordan algebras, System identification in the presence of outliers and random noises: a compressed sensing approach, Sparse time-frequency representation of nonlinear and nonstationary data, Robustness of orthogonal matching pursuit under restricted isometry property, Analysis of orthogonal multi-matching pursuit under restricted isometry property, A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization, The variational Garrote, Compressed sensing with coherent tight frames via \(l_q\)-minimization for \(0 < q \leq 1\), Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, A new perspective on least squares under convex constraint, Sparse dual frames and dual Gabor functions of minimal time and frequency supports, The road to deterministic matrices with the restricted isometry property, Equivalence and strong equivalence between the sparsest and least \(\ell _1\)-norm nonnegative solutions of linear systems and their applications, On the null space property of \(l_q\)-minimization for \(0 < q \leq 1\) in compressed sensing, Computing sparse representation in a highly coherent dictionary based on difference of \(L_1\) and \(L_2\), Extreme point inequalities and geometry of the rank sparsity ball, Compressive sensing using chaotic sequence based on Chebyshev map, Toward a unified theory of sparse dimensionality reduction in Euclidean space, Foveated compressive imaging for low power vehicle fingerprinting and tracking in aerial imagery, Incomplete variables truncated conjugate gradient method for signal reconstruction in compressed sensing, Bayesian signal detection with compressed measurements, Kernel sparse representation for time series classification, On the solution uniqueness characterization in the 
L1 norm and polyhedral gauge recovery, Image reconstruction using analysis model prior, The sparsest solutions to \(Z\)-tensor complementarity problems, Convergence of iterative hard-thresholding algorithm with continuation, An iteratively approximated gradient projection algorithm for sparse signal reconstruction, Observability for initial value problems with sparse initial data, On uncertainty principles in the finite dimensional setting, Compressed sensing with coherent and redundant dictionaries, A semidefinite programming study of the Elfving theorem, Estimation of high-dimensional low-rank matrices, Estimation of (near) low-rank matrices with noise and high-dimensional scaling, Approximation of frame based missing data recovery, The Dantzig selector and sparsity oracle inequalities, Sparse recovery under matrix uncertainty, Approximation accuracy, gradient methods, and error bound for structured convex optimization, \(\ell_{1}\)-penalization for mixture regression models, Error estimates for orthogonal matching pursuit and random dictionaries, Phase transitions for greedy sparse approximation algorithms, Testing the nullspace property using semidefinite programming, On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization, Null space conditions and thresholds for rank minimization, Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling, Recovery of signals under the condition on RIC and ROC via prior support information, Sparse Markowitz portfolio selection by using stochastic linear complementarity approach, Linear convergence of the randomized sparse Kaczmarz method, Image encryption technique combining compressive sensing with double random-phase encoding, A simpler approach to coefficient regularized support vector machines regression, Signal recovery under mutual incoherence property and oracle inequalities, Recovery analysis for weighted mixed \(\ell_2 / \ell_p\) 
minimization with \(0 < p \leq 1\), An overview on the applications of matrix theory in wireless communications and signal processing, A remark on joint sparse recovery with OMP algorithm under restricted isometry property, Sparse signal recovery with prior information by iterative reweighted least squares algorithm, A strong converse bound for multiple hypothesis testing, with applications to high-dimensional estimation, Sparse recovery in probability via \(l_q\)-minimization with Weibull random matrices for \(0 < q\leq 1\), A probabilistic learning algorithm for robust modeling using neural networks with random weights, DC formulations and algorithms for sparse optimization problems, Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing, Iterative reweighted methods for \(\ell _1-\ell _p\) minimization, The noncooperative transportation problem and linear generalized Nash games, Support vector machines regression with \(l^1\)-regularizer, On the sparseness of 1-norm support vector machines, On finding a generalized lowest rank solution to a linear semi-definite feasibility problem, Approximately normalized iterative hard thresholding for nonlinear compressive sensing, Compressive sensing in signal processing: algorithms and transform domain formulations, Efficient extreme learning machine via very sparse random projection, An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors, The convex geometry of linear inverse problems, Linear program relaxation of sparse nonnegative recovery in compressive sensing microarrays, Strengthening hash families and compressive sensing, Robust estimation for an inverse problem arising in multiview geometry, Full spark frames, Sparse total least squares: analysis and greedy algorithms, Compressed sensing and matrix completion with constant proportion of corruptions, An efficient algorithm for 
\(\ell_{0}\) minimization in wavelet frame based image restoration, Minimax risks for sparse regressions: ultra-high dimensional phenomenons, Estimation of Gaussian graphs by model selection, On the conditions used to prove oracle results for the Lasso, The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso), Restricted \(p\)-isometry properties of partially sparse signal recovery, A short note on compressed sensing with partially known signal support, Uniform recovery in infinite-dimensional compressed sensing and applications to structured binary sampling, Kernel conjugate gradient methods with random projections, A data-driven framework for sparsity-enhanced surrogates with arbitrary mutually dependent randomness, On orthogonal projections for dimension reduction and applications in augmented target loss functions for learning problems, The landscape of empirical risk for nonconvex losses, Uniqueness of the minimal \(l_1\)-norm solution to the monotone linear complementarity problem, Sparse approximation of fitting surface by elastic net, Optimality conditions for locally Lipschitz optimization with \(l_0\)-regularization, Sparse polynomial interpolation: sparse recovery, super-resolution, or Prony?, Sparse approximate reconstruction decomposed by two optimization problems, An algebraic perspective on integer sparse recovery, Computing the spark: mixed-integer programming for the (vector) matroid girth problem, Deterministic constructions of compressed sensing matrices based on optimal codebooks and codes, On the convergence of the iterates of proximal gradient algorithm with extrapolation for convex nonsmooth minimization problems, A sharp RIP condition for orthogonal matching pursuit, An augmented Lagrangian algorithm for total bounded variation regularization based image deblurring, Complexity and applications of the homotopy principle for uniformly constrained sparse minimization, Greedy variance 
estimation for the LASSO, Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations, Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems, Sparse space-time models: concentration inequalities and Lasso, Phase retrieval with PhaseLift algorithm, Sparse recovery in bounded Riesz systems with applications to numerical methods for PDEs, Application of ESN prediction model based on compressed sensing in stock market, A unified primal dual active set algorithm for nonconvex sparse recovery, MRI simulation-based evaluation of an efficient under-sampling approach, Data-driven algorithm selection and tuning in optimization and signal processing, Oracle posterior contraction rates under hierarchical priors, Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method, Dual-density-based reweighted \(\ell_1\)-algorithms for a class of \(\ell_0\)-minimization problems, The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning, Optimal portfolio selections via \(\ell_{1, 2}\)-norm regularization, Nonuniqueness of solutions of a class of \(\ell_0\)-minimization problems, A sparse optimization problem with hybrid \(L_2\)-\(L_p\) regularization for application of magnetic resonance brain images, On the computational complexity of the secure state-reconstruction problem, The vulnerability of distributed state estimator under stealthy attacks, Efficiency of orthogonal super greedy algorithm under the restricted isometry property, An inexact proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth optimization problems, Smoothing Newton method for \(\ell^0\)-\(\ell^2\) regularized linear inverse problem, An adaptation for iterative structured matrix completion, Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning, Regret 
lower bound and optimal algorithm for high-dimensional contextual linear bandit, Structured iterative hard thresholding with on- and off-grid applications, Asymptotic analysis for extreme eigenvalues of principal minors of random matrices, Flexible construction of measurement matrices in compressed sensing based on extensions of incidence matrices of combinatorial designs, The all-or-nothing phenomenon in sparse linear regression, Robust sparse recovery via a novel convex model, Partial gradient optimal thresholding algorithms for a class of sparse optimization problems, Sufficient conditions for the uniqueness of solution of the weighted norm minimization problem, Perturbation analysis of \(L_{1-2}\) method for robust sparse recovery, Weighted thresholding homotopy method for sparsity constrained optimization, Learning ``best kernels from data in Gaussian process regression. With application to aerodynamics, Commonsense explanations of sparsity, Zipf law, and Nash's bargaining solution, Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses, On the grouping effect of the \(l_{1-2}\) models, A convex relaxation framework consisting of a primal-dual alternative algorithm for solving \(\ell_0\) sparsity-induced optimization problems with application to signal recovery based image restoration, Search for sparse solutions of super-large systems with a tensor structure, A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks, Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications, Hierarchical compressed sensing, Proof methods for robust low-rank matrix recovery, Unbiasing in iterative reconstruction algorithms for discrete compressed sensing, A fast algorithm for the semi-definite relaxation of the state estimation problem in power grids, On some aspects of approximation of ridge functions, Near oracle performance 
and block analysis of signal space greedy methods, Error analysis of reweighted \(l_1\) greedy algorithm for noisy reconstruction, Greedy signal space methods for incoherence and beyond, New analysis of manifold embeddings and signal recovery from compressive measurements, Stable recovery of analysis based approaches, Sparse learning via Boolean relaxations, An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization, Optimal \(D\)-RIP bounds in compressed sensing, The sparsest solution to the system of absolute value equations, On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions, Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery, Optimality conditions for sparse nonlinear programming, Greedy strategies for convex optimization, Further properties of the forward-backward envelope with applications to difference-of-convex programming, Split Bregman iteration algorithm for image deblurring using fourth-order total bounded variation regularization model, Compressed sensing by inverse scale space and curvelet thresholding, Reconstruction of sparse signals in impulsive disturbance environments, Error bounds for compressed sensing algorithms with group sparsity: A unified approach, Explicit universal sampling sets in finite vector spaces, A new bound on the block restricted isometry constant in compressed sensing, Adaptive step-size matching pursuit algorithm for practical sparse reconstruction, Improved RIP conditions for compressed sensing with coherent tight frames, The modified accelerated Bregman method for regularized basis pursuit problem, A primal Douglas-Rachford splitting method for the constrained minimization problem in compressive sensing, Newly deterministic construction of compressed sensing matrices via singular linear spaces over finite fields, Generalized sparse recovery model and its neural dynamical optimization method 
for compressed sensing, A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection, Iteratively reweighted \(\ell _1\) algorithms with extrapolation, The linearized alternating direction method of multipliers for sparse group LAD model, A simple homotopy proximal mapping algorithm for compressive sensing, Model identification of a network as compressing sensing, Noncommutative Bennett and Rosenthal inequalities, Randomized first order algorithms with applications to \(\ell _{1}\)-minimization, Multi-stage convex relaxation for feature selection, Bounds of restricted isometry constants in extreme asymptotics: formulae for Gaussian matrices, On a unified view of nullspace-type conditions for recoveries associated with general sparsity structures, Heat source identification based on \(\ell_1\) constrained minimization, Partial \(S\)-goodness for partially sparse signal recovery, Concentration of \(S\)-largest mutilated vectors with \(\ell_p\)-quasinorm for \(0<p\leq 1\) and its applications, Restricted isometry property for random matrices with heavy-tailed columns, Optimized projections for compressed sensing via rank-constrained nearest correlation matrix, Deterministic constructions of compressed sensing matrices, Gaussian averages of interpolated bodies and applications to approximate reconstruction, The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). 
(With discussions and rejoinder)., \(L^{1}\)-minimization methods for Hamilton-Jacobi equations: the one-dimensional case, Learning in compressed space, Robust multi-image processing with optimal sparse regularization, On the interval of fluctuation of the singular values of random matrices, The Moreau envelope based efficient first-order methods for sparse recovery, Stable recovery of low-rank matrix via nonconvex Schatten \(p\)-minimization, Conjugate gradient hard thresholding pursuit algorithm for sparse signal recovery, An inexact splitting method for the subspace segmentation from incomplete and noisy observations, Exact matrix completion via convex optimization, Proximal methods for reweighted \(l_Q\)-regularization of sparse signal recovery, A theoretical result of sparse signal recovery via alternating projection method, Sparse non-negative super-resolution -- simplified and stabilised, Smoothing strategy along with conjugate gradient algorithm for signal reconstruction, Newton method for \(\ell_0\)-regularized optimization, A necessary and sufficient condition for sparse vector recovery via \(\ell_1-\ell_2\) minimization, Sparse signal recovery from phaseless measurements via hard thresholding pursuit, Performance analysis of the compressed distributed least squares algorithm, The finite steps of convergence of the fast thresholding algorithms with \(f\)-feedbacks in compressed sensing, Newton-type optimal thresholding algorithms for sparse optimization problems, Robust signal recovery via \(\ell_{1-2}/ \ell_p\) minimization with partially known support, Explicit RIP matrices: an update, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, Compressed sensing of low-rank plus sparse matrices, A new hybrid \(l_p\)-\(l_2\) model for sparse solutions with applications to image processing, Adaptive iterative hard thresholding for low-rank matrix recovery and rank-one measurements, On unique 
recovery of finite-valued integer signals and admissible lattices of sparse hypercubes, The smoothed complexity of Frank-Wolfe methods via conditioning of random matrices and polytopes, On sparse recovery algorithms in unions of orthonormal bases, Lifted stationary points of sparse optimization with complementarity constraints, Uniqueness in nuclear norm minimization: flatness of the nuclear norm sphere and simultaneous polarization, Posterior information-based image measurement matrix optimization, Restricted isometry property for matrices whose entries are random variables belonging to some Orlicz spaces $L_U(\Omega )$, Duality and Convex Programming, A Fast Active Set Block Coordinate Descent Algorithm for $\ell_1$-Regularized Least Squares, Resolution Analysis of Imaging with $\ell_1$ Optimization, Low Complexity Regularization of Linear Inverse Problems, On the Minimization Over Sparse Symmetric Sets: Projections, Optimality Conditions, and Algorithms, Sparse PCA: Convex Relaxations, Algorithms and Applications, Motion Averaging in 3D Reconstruction Problems, Correcting Data Corruption Errors for Multivariate Function Approximation, Guarantees of Riemannian Optimization for Low Rank Matrix Recovery, A perturbation analysis of block-sparse compressed sensing via mixed ℓ2/ℓ1 minimization, On Low Rank Approximation of Linear Operators in p-Norms and Some Algorithms, Identification of interactions in fractional-order systems with high dimensions, Sparse Sensor Placement Optimization for Classification, Fast Implementation of ℓ 1-Greedy Algorithm, Additive Combinatorics: With a View Towards Computer Science and Cryptography—An Exposition, PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming, EXACT LOW-RANK MATRIX RECOVERY VIA NONCONVEX SCHATTEN p-MINIMIZATION, SPARSE RECONSTRUCTION OF HARDY SIGNAL AND APPLICATIONS TO TIME-FREQUENCY DISTRIBUTION, Sparse Representation of Signals in Hardy Space, Primal–dual first-order 
methods for a class of cone programming, On sparse representation of analytic signal in Hardy space, Explicit constructions of RIP matrices and related problems, Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices, Geometry of log-concave ensembles of random matrices and approximate reconstruction, Democracy in action: quantization, saturation, and compressive sensing, New bounds on the restricted isometry constant \(\delta _{2k}\), A note on the complexity of \(L _{p }\) minimization, Equivalence of minimal \(\ell _{0}\)- and \(\ell _{p }\)-norm solutions of linear equalities, inequalities and linear programs for sufficiently small \(p\), Steiner equiangular tight frames, Analysis of convergence for the alternating direction method applied to joint sparse recovery, Noise folding in completely perturbed compressed sensing, Nonlinear frames and sparse reconstructions in Banach spaces, Restricted \(p\)-isometry property and its application for nonconvex compressive sensing, On support sizes of restricted isometry constants, Improved bounds for restricted isometry constants, On the size of incoherent systems, A convergent relaxation of the Douglas-Rachford algorithm, New augmented Lagrangian-based proximal point algorithm for convex optimization with equality constraints, Enhancing sparsity of Hermite polynomial expansions by iterative rotations, Construction of a full row-rank matrix system for multiple scanning directions in discrete tomography, Instance-optimality in probability with an \(\ell _1\)-minimization decoder, Analysis of basis pursuit via capacity sets, Enhancing sparsity by reweighted \(\ell _{1}\) minimization, Compressive sampling and rapid reconstruction of broadband frequency hopping signals with interference, Overcomplete blind source separation based on generalized Gaussian function and SL0 norm, A null space analysis of the \(\ell_1\)-synthesis method in 
dictionary-based compressed sensing, Phase retrieval for sparse signals, Accuracy guaranties for \(\ell_{1}\) recovery of block-sparse signals, On the \(\ell_1\)-norm invariant convex \(k\)-sparse decomposition of signals, GPU accelerated greedy algorithms for compressed sensing, The \(\ell_{2,q}\) regularized group sparse optimization: lower bound theory, recovery bound and algorithms, Preserving injectivity under subgaussian mappings and its application to compressed sensing, Rigorous restricted isometry property of low-dimensional subspaces, Theoretical guarantees for graph sparse coding, A theoretical perspective of solving phaseless compressive sensing via its nonconvex relaxation, Minimizers of sparsity regularized Huber loss function, Functional linear regression that's interpretable, Some sharp performance bounds for least squares regression with \(L_1\) regularization, Fast \(\ell _{1}\) minimization by iterative thresholding for multidimensional NMR spectroscopy, A sublinear algorithm for the recovery of signals with sparse Fourier transform when many samples are missing, A primal dual active set with continuation algorithm for the \(\ell^0\)-regularized optimization problem, Efficient nonconvex sparse group feature selection via continuous and discrete optimization, Optimal dual certificates for noise robustness bounds in compressive sensing, Approximation of rectangular beta-Laguerre ensembles and large deviations, Stability of the elastic net estimator, Sampling in the analysis transform domain, Sharp non-asymptotic performance bounds for \(\ell_1\) and Huber robust regression estimators, Interpolation via weighted \(\ell_{1}\) minimization, From compression to compressed sensing, A strong restricted isometry property, with an application to phaseless compressed sensing, A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery, The restricted isometry property and its implications for 
compressed sensing, A remark on compressed sensing, Sparse approximate solution of partial differential equations, Sparse recovery by non-convex optimization - instance optimality, A note on guaranteed sparse recovery via \(\ell_1\)-minimization, High-dimensional Gaussian model selection on a Gaussian design, The benefit of group sparsity, Lasso-type recovery of sparse representations for high-dimensional data, Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Sparsest solutions of underdetermined linear systems via \( \ell _q\)-minimization for \(0<q\leqslant 1\), Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit, Random sampling of sparse trigonometric polynomials. II: Orthogonal matching pursuit versus basis pursuit, Random projections of smooth manifolds, Sparse solutions to underdetermined Kronecker product systems, A simple proof of the restricted isometry property for random matrices, Uniform uncertainty principle for Bernoulli and subgaussian ensembles, Signal recovery under cumulative coherence, Secure estimation based Kalman filter for cyber-physical systems against sensor attacks, On the post selection inference constant under restricted isometry properties, Gaussian polytopes: a cumulant-based approach, Observable dictionary learning for high-dimensional statistical inference, A primal-dual homotopy algorithm for \(\ell _{1}\)-minimization with \(\ell _{\infty }\)-constraints, Sparse approximate solution of fitting surface to scattered points by MLASSO model, Compressed sensing for real measurements of quaternion signals, Sparsity and independence: balancing two objectives in optimization for source separation with application to fMRI analysis, Compressed sensing of data with a known distribution, Expander \(\ell_0\)-decoding, \(l_{0}\)-norm based structural sparse least square regression for feature 
selection, Recovery of block sparse signals under the conditions on block RIC and ROC by BOMP and BOMMP, Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression, Restricted Robinson constraint qualification and optimality for cardinality-constrained cone programming, Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions, The matrix splitting based proximal fixed-point algorithms for quadratically constrained \(\ell_{1}\) minimization and Dantzig selector, Analysis of the equivalence relationship between \(l_{0}\)-minimization and \(l_{p}\)-minimization, Online fault diagnosis for nonlinear power systems, Exact recovery of sparse multiple measurement vectors by \(l_{2,p}\)-minimization, A hierarchical framework for recovery in compressive sensing, Sparse blind deconvolution and demixing through \(\ell_{1,2}\)-minimization, Deterministic construction of compressed sensing matrices based on semilattices, Hybrid reconstruction of quantum density matrix: when low-rank meets sparsity, A Rice method proof of the null-space property over the Grassmannian, A sharp recovery condition for block sparse signals by block orthogonal multi-matching pursuit, Random matrices and erasure robust frames, A group adaptive elastic-net approach for variable selection in high-dimensional linear regression, Learning data discretization via convex optimization, Sparsity and incoherence in orthogonal matching pursuit, Linear regression with sparsely permuted data, Optimization methods for regularization-based ill-posed problems: a survey and a multi-objective framework, Fused Lasso penalized least absolute deviation estimator for high dimensional linear regression, Adaptive iterative hard thresholding for least absolute deviation problems with sparsity constraints, Local recovery bounds for prior support constrained compressed sensing, Weighted \(\ell_p\) (\(0<p\le 1\)) 
minimization with non-uniform weights for sparse recovery under partial support information, Sparse PSD approximation of the PSD cone, Sparse representation of vectors in lattices and semigroups, Fundamental barriers to high-dimensional regression with convex penalties, Low-rank matrix recovery with Ky Fan 2-\(k\)-norm, Exploiting prior knowledge in compressed sensing to design robust systems for endoscopy image recovery, Sparse solutions to an underdetermined system of linear equations via penalized Huber loss, Sparse high-dimensional linear regression. Estimating squared error and a phase transition, Wigner and Wishart ensembles for sparse Vinberg models, De-biasing the Lasso with degrees-of-freedom adjustment, On the optimization landscape of tensor decompositions, Iteratively reweighted least squares and slime mold dynamics: connection and convergence, Compressive sensing of high betweenness centrality nodes in networks, Sparse signal recovery via generalized Gaussian function, The restricted isometry property of block diagonal matrices for group-sparse signal recovery, A new sufficient condition for sparse recovery with multiple orthogonal least squares, Screening for a reweighted penalized conditional gradient method, Gradient projection Newton pursuit for sparsity constrained optimization, The springback penalty for robust signal recovery, Generalizing CoSaMP to signals from a union of low dimensional linear subspaces, A hybrid quasi-Newton projected-gradient method with application to lasso and basis-pursuit denoising, Smoothing inertial projection neural network for minimization \(L_{p-q}\) in sparse signal reconstruction, Parametrized quasi-soft thresholding operator for compressed sensing and matrix completion, An improved total variation regularized RPCA for moving object detection with dynamic background, Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm, Iterative hard thresholding for compressed data separation, A smoothing 
method for sparse optimization over convex sets, A filtered bucket-clustering method for projection onto the simplex and the \(\ell_1\) ball, Subspace learning by \(\ell^0\)-induced sparsity, Flexible semi-supervised embedding based on adaptive loss regression: application to image categorization, Deterministic construction of compressed sensing matrices from constant dimension codes, Group sparse recovery in impulsive noise via alternating direction method of multipliers, A perturbation analysis of nonconvex block-sparse compressed sensing, Effective zero-norm minimization algorithms for noisy compressed sensing, Geological facies recovery based on weighted \(\ell_1\)-regularization, Three deterministic constructions of compressed sensing matrices with low coherence, Deletion correcting codes meet the Littlewood-Offord problem, Numerical aspects for approximating governing equations using data, A general framework for Bayes structured linear models, Level set methods for stochastic discontinuity detection in nonlinear problems, New preconditioners applied to linear programming and the compressive sensing problems, Phaseless compressive sensing using partial support information, Discrete optimization methods for group model selection in compressed sensing, Convergence and stability analysis of iteratively reweighted least squares for noisy block sparse recovery, A two-step iterative algorithm for sparse hyperspectral unmixing via total variation, The nonnegative zero-norm minimization under generalized \(Z\)-matrix measurement, Sharp RIP bound for sparse signal and low-rank matrix recovery, A new iterative firm-thresholding algorithm for inverse problems with sparsity constraints, A reweighted nuclear norm minimization algorithm for low rank matrix recovery, Stable restoration and separation of approximately sparse signals, Fast thresholding algorithms with feedbacks for sparse signal recovery, Sparse recovery with coherent tight frames via analysis Dantzig 
selector and analysis LASSO, Sparse signals recovery from noisy measurements by orthogonal matching pursuit, Every Hilbert space frame has a Naimark complement, Sparse trace norm regularization, Necessary and sufficient conditions of solution uniqueness in 1-norm minimization, Sparse-view ultrasound diffraction tomography using compressed sensing with nonuniform FFT, Cirrhosis classification based on texture classification of random features, A numerical exploration of compressed sampling recovery, Stability and instance optimality for Gaussian measurements in compressed sensing, Construction of highly redundant incoherent unit norm tight frames as a union of orthonormal bases, The convergence guarantee of the iterative hard thresholding algorithm with suboptimal feedbacks for large systems, On convex envelopes and regularization of non-convex functionals without moving global minima, On the computation of sparse solutions to the controllability problem for discrete-time linear systems, On a monotone scheme for nonconvex nonsmooth optimization with applications to fracture mechanics, On rank awareness, thresholding, and MUSIC for joint sparse recovery, Signal reconstruction by conjugate gradient algorithm based on smoothing \(l_1\)-norm, Distributed secure state estimation for cyber-physical systems under sensor attacks, Recovering low-rank and sparse matrix based on the truncated nuclear norm, A survey of randomized algorithms for training neural networks, Sorted concave penalized regression, An effective algorithm for the spark of sparse binary measurement matrices, On the differences between \(L_2\) boosting and the Lasso, A compressive sensing based privacy preserving outsourcing of image storage and identity authentication service in cloud, An efficient privacy-preserving compressive data gathering scheme in WSNs, An accelerated version of alternating direction method of multipliers for TV minimization in EIT, On sparse beamformer design with reverberation, A 
new smoothed L0 regularization approach for sparse signal recovery, A new sparse recovery method for the inverse acoustic scattering problem, Erasure recovery matrices for encoder protection, RIP-based performance guarantee for low-tubal-rank tensor recovery, On privacy preserving data release of linear dynamic networks, Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem, Compressed dictionary learning, The sparsest solution of the union of finite polytopes via its nonconvex relaxation, Deterministic constructions of compressed sensing matrices based on codes, New conditions on stable recovery of weighted sparse signals via weighted \(l_1\) minimization, On the strong restricted isometry property of Bernoulli random matrices, Truncated sparse approximation property and truncated \(q\)-norm minimization, Alternating direction and Taylor expansion minimization algorithms for unconstrained nuclear norm optimization, Compressed sensing MR image reconstruction exploiting TGV and wavelet sparsity, Optimal RIP bounds for sparse signals recovery via \(\ell_p\) minimization, Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit, A modified primal-dual method with applications to some sparse recovery problems, Outlier deletion based improvement on the StOMP algorithm for sparse solution of large-scale underdetermined problems, Weaker regularity conditions and sparse recovery in high-dimensional regression, A smoothed \(l_0\)-norm and \(l_1\)-norm regularization algorithm for computed tomography, Preconditioning for orthogonal matching pursuit with noisy and random measurements: the Gaussian case, Sensitivity Analysis for Mirror-Stratifiable Convex Functions, Sparse signal recovery via non-convex optimization and overcomplete dictionaries, Deterministic construction of compressed sensing matrices with characters
over finite fields, Iterative positive thresholding algorithm for non-negative sparse optimization, A General Theory of Singular Values with Applications to Signal Denoising, Sparsest representations and approximations of an underdetermined linear system, Multicompartment magnetic resonance fingerprinting, An efficient superpixel-based sparse representation framework for hyperspectral image classification, CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, Guarantees of total variation minimization for signal recovery, Time for dithering: fast and quantized random embeddings via the restricted isometry property, Super-resolution of point sources via convex programming, A null-space-based weighted l1 minimization approach to compressed sensing, Proximal Mapping for Symmetric Penalty and Sparsity, Flavors of Compressive Sensing, Batched Stochastic Gradient Descent with Weighted Sampling, Sliced-Inverse-Regression-Aided Rotated Compressive Sensing Method for Uncertainty Quantification, Hierarchical Bayesian models and sparsity: ℓ2-magic, Consistency Analysis for Massively Inconsistent Datasets in Bound-to-Bound Data Collaboration, Inverse scale space decomposition, Greedy forward regression for variable screening, A Tight Bound of Hard Thresholding, Tight and full spark Chebyshev frames with real entries and worst-case coherence analysis, What is a Frame?
Theory and Applications of Frames, A Multiple Measurement Vector Approach to Synthetic Aperture Radar Imaging, Spherical Designs and Nonconvex Minimization for Recovery of Sparse Signals on the Sphere, Frames for compressed sensing using coherence, An adaptive inverse scale space method for compressed sensing, $\ell _0$ Minimization for wavelet frame based image restoration, ELASTIC-NET REGULARIZATION FOR LOW-RANK MATRIX RECOVERY, Estimator of prediction error based on approximate message passing for penalized linear regression, Welch bound-achieving compressed sensing matrices from optimal codebooks, Deterministic Construction of Compressed Sensing Matrices from Codes, Decoupling noise and features via weighted ℓ1-analysis compressed sensing, Stochastic Collocation Methods via Minimisation of the Transformed L1-Penalty, Bayesian sparse regularization for multiple force identification and location in time domain, Binary sparse signal recovery with binary matching pursuit*, Sparse Solutions by a Quadratically Constrained ℓq (0 < q < 1) Minimization Model, Analysis and Algorithms for Some Compressed Sensing Models Based on L1/L2 Minimization, ℓ1 − αℓ2 minimization methods for signal and image reconstruction with impulsive noise removal, The Dantzig selector: recovery of signal via ℓ1 − αℓ2 minimization, A new sufficient condition for sparse vector recovery via ℓ1 − ℓ2 local minimization, Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing, An Algorithm Solving Compressive Sensing Problem Based on Maximal Monotone Operators, PHASE TRANSITIONS IN ERROR CORRECTING AND COMPRESSED SENSING BY ℓ1 LINEAR PROGRAMMING, SYLVESTER–GALLAI TYPE THEOREMS FOR APPROXIMATE COLLINEARITY, An Lp Norm Relaxation Approach to Positive Influence Maximization in Social Network under the Deterministic Linear Threshold Model, Performance comparisons of greedy algorithms in compressed sensing, A
Generalized Class of Hard Thresholding Algorithms for Sparse Signal Recovery, Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems, Stability of lq-analysis based dual frame with Weibull matrices for 0 < q ≤ 1, Sparse Approximation using $\ell_1-\ell_2$ Minimization and Its Application to Stochastic Collocation, Stochastic Collocation Methods via $\ell_1$ Minimization Using Randomized Quadratures, Constructing New Weighted ℓ1-Algorithms for the Sparsest Points of Polyhedral Sets, Necessary and sufficient conditions for linear convergence of ℓ1-regularization, Counting faces of randomly projected polytopes when the projection radically lowers dimension, Compressed sensing and best k-term approximation, COMPRESSED SENSING BY ITERATIVE THRESHOLDING OF GEOMETRIC WAVELETS: A COMPARING STUDY, CURVELET-WAVELET REGULARIZED SPLIT BREGMAN ITERATION FOR COMPRESSED SENSING, A review on restoration of seismic wavefields based on regularization and compressive sensing, On perturbed steepest descent methods with inexact line search for bilevel convex optimization, Multilevel preconditioning and adaptive sparse solution of inverse problems, Necessary and Sufficient Conditions for Noiseless Sparse Recovery via Convex Quadratic Splines, Truncated $l_{1-2}$ Models for Sparse Recovery and Rank Minimization, Identifying small mean-reverting portfolios, Optimal Experimental Design for Inverse Problems with State Constraints, Fast Phase Retrieval from Local Correlation Measurements, Illumination Strategies for Intensity-Only Imaging, An Introduction to Compressed Sensing, Reconstruction of sparse connectivity in neural networks from spike train covariances, Statistical mechanics of complex neural systems and high dimensional data, Typical reconstruction limits for distributed compressed sensing based on ℓ2,1-norm minimization and Bayesian optimal reconstruction, Evaluation of generalized degrees of freedom for sparse
estimation by replica method, Cross validation in LASSO and its acceleration, Sparse approximation based on a random overcomplete basis, Typical reconstruction performance for distributed compressed sensing based on ℓ2,1-norm regularized least square and Bayesian optimal reconstruction: influences of noise, A statistical mechanics approach to de-biasing and uncertainty estimation in LASSO for random measurements, Dictionary-Based Classification Models. Applications for Multichannel Neural Activity Analysis, Fast sparse reconstruction: Greedy inverse scale space flows, Eventual linear convergence of the Douglas-Rachford iteration for basis pursuit, Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to Magnetic Resonance Imaging, Compressive Sensing with Redundant Dictionaries and Structured Measurements, Global Convergence of Splitting Methods for Nonconvex Composite Optimization, THE RESTRICTED ISOMETRY PROPERTY FOR SIGNAL RECOVERY WITH COHERENT TIGHT FRAMES, Constructing Surrogate Models of Complex Systems with Enhanced Sparsity: Quantifying the Influence of Conformational Uncertainty in Biomolecular Solvation, On the Doubly Sparse Compressed Sensing Problem, A Survey of Compressed Sensing, Compressed Sensing, Sparse Inversion, and Model Mismatch, Regularized sparse representation for image deconvolution, Statistical challenges of high-dimensional data, Surface Reconstruction via L1-Minimization, A polynomial algorithm for best-subset selection problem, Decoding from Pooled Data: Sharp Information-Theoretic Bounds, ISLET: Fast and Optimal Low-Rank Tensor Regression via Importance Sketching, Optimizing Sparsity over Lattices and Semigroups, A general theory for subspace-sparse recovery, Estimation of block sparsity in compressive sensing, On block accelerations of quantile randomized Kaczmarz for corrupted systems of linear equations, An Alternating Method for
Cardinality-Constrained Optimization: A Computational Study for the Best Subset Selection and Sparse Portfolio Problems, Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations, The Noise Collector for sparse recovery in high dimensions, Approximation Bounds for Sparse Programs, Image Segmentation via Fischer-Burmeister Total Variation and Thresholding, The smoothing objective penalty function method for two-cardinality sparse constrained optimization problems, Random Sampling and Reconstruction of Sparse Time- and Band-Limited Signals, Performance analysis for unconstrained analysis based approaches*, Asymptotic Theory of \(\boldsymbol{\ell_1}\)-Regularized PDE Identification from a Single Noisy Trajectory, Recovery analysis for block ℓp − ℓ1 minimization with prior support information, Robust recovery of a kind of weighted l1-minimization without noise level, Nonlinear Iterative Hard Thresholding for Inverse Scattering, $N$-Dimensional Tensor Completion for Nuclear Magnetic Resonance Relaxometry, On the distribution of an arbitrary subset of the eigenvalues for some finite dimensional random matrices, Sparse Probability Assessment Heuristic Based on Orthogonal Matching Pursuit, Robust recovery of signals with partially known support information using weighted BPDN, An unbiased approach to compressed sensing, Generalized Regression Estimators with High-Dimensional Covariates, Stability analysis of a class of sparse optimization problems, Inferring Sparse Preference Lists from Partial Information, A Sampling Theorem for Deconvolution in Two Dimensions, Distance-Sparsity Transference for Vertices of Corner Polyhedra, Sparsity Promoting Hybrid Solvers for Hierarchical Bayesian Inverse Problems, Fast Thresholding Algorithms with Feedbacks and Partially Known Support for Compressed Sensing, Error Localization of Best $L_{1}$ Polynomial Approximants, A Gradient-Enhanced L1 Approach for the Recovery of Sparse Trigonometric
Polynomials, The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty, Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices, A Regularized Correntropy Framework for Robust Pattern Recognition, Optimal $k$-Thresholding Algorithms for Sparse Optimization Problems, An Efficient Proximal Block Coordinate Homotopy Method for Large-Scale Sparse Least Squares Problems, Random Projections for Linear Programming, Sparse Recovery via Partial Regularization: Models, Theory, and Algorithms, Weak Stability of 1-Minimization Methods in Sparse Data Reconstruction, An improved bound of cumulative coherence for signal recovery, Minimization of the difference of Nuclear and Frobenius norms for noisy low rank matrix recovery, Sparse recovery from extreme eigenvalues deviation inequalities, Modern regularization methods for inverse problems, Hessian Barrier Algorithms for Linearly Constrained Optimization Problems, Exact Camera Location Recovery by Least Unsquared Deviations, Characterization of ℓ1 minimizer in one-bit compressed sensing, A Hybrid Finite-Dimensional RHC for Stabilization of Time-Varying Parabolic Equations, Perturbation Analysis of Orthogonal Least Squares, Quasi-linear Compressed Sensing, Minimization of $\ell_{1-2}$ for Compressed Sensing, Euclidean arrangements in Banach spaces, The Restricted Isometry Property of Subsampled Fourier Matrices, Compressive Sampling for Energy Spectrum Estimation of Turbulent Flows, A New Computational Method for the Sparsest Solutions to Systems of Linear Equations, On the Absence of Uniform Recovery in Many Real-World Applications of Compressed Sensing and the Restricted Isometry Property and Nullspace Property in Levels, Nonuniform recovery of fusion frame structured sparse signals, Dynamical sampling in multiply generated shift-invariant spaces, Stable recovery of sparse signals with coherent tight frames via lp-analysis approach, A 
Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions, Block-sparse compressed sensing: non-convex model and iterative re-weighted algorithm, A dual split Bregman method for fast $\ell ^1$ minimization, Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity, Recovery of low-rank matrices based on the rank null space properties, Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery, CHAOTIC ANALOG-TO-INFORMATION CONVERSION: PRINCIPLE AND RECONSTRUCTABILITY WITH PARAMETER IDENTIFIABILITY, Towards a Mathematical Theory of Super-resolution, Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning, A time continuation based fast approximate algorithm for compressed sensing related optimization, Sparse Solutions of Linear Diophantine Equations, Low-Rank Tensor Recovery using Sequentially Optimal Modal Projections in Iterative Hard Thresholding (SeMPIHT), Influences of preconditioning on the mutual coherence and the restricted isometry property of Gaussian/Bernoulli measurement matrices, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Compressed sensing, Structured sparsity through convex optimization, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, A general theory of concave regularization for high-dimensional sparse estimation problems, A Plaintext-Related Image Encryption Algorithm Based on Compressive Sensing and a Novel Hyperchaotic System, Weighted lp − l1 minimization methods for block sparse recovery and rank minimization, Lower Memory Oblivious (Tensor) Subspace Embeddings with Fewer Random Bits: Modewise Methods for Least Squares, Robust signal recovery for ℓ1–2 minimization via prior support information, Supervised-distributed control with joint performance and communication optimisation, Sparse Solutions of a Class of Constrained
Optimization Problems, Bayesian hierarchical dictionary learning, A new result on recovery sparse signals using orthogonal matching pursuit, Gaining Outlier Resistance With Progressive Quantiles: Fast Algorithms and Theoretical Studies, Required Number of Iterations for Sparse Signal Recovery via Orthogonal Least Squares, An efficient semismooth Newton method for adaptive sparse signal recovery problems, Exact penalization for cardinality and rank-constrained optimization problems via partial regularization, Stability bounds for reconstruction from sampling erasures, Compressed data separation via unconstrained l1-split analysis, Improved RIP-based bounds for guaranteed performance of two compressed sensing algorithms, Algebraic compressed sensing, Iterative hard thresholding for low CP-rank tensor models, Doubly majorized algorithm for sparsity-inducing optimization problems with regularizer-compatible constraints, Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines, Global convergence of the gradient method for functions definable in o-minimal structures, Heuristics for Finding Sparse Solutions of Linear Inequalities, Block-sparse recovery and rank minimization using a weighted \(l_p-l_q\) model, Sparse reconstruction via the mixture optimization model with iterative support estimate, Convergence of the Momentum Method for Semialgebraic Functions with Locally Lipschitz Gradients, Smoothing inertial neurodynamic approach for sparse signal reconstruction via \(L_p\)-norm minimization, Weighted sparsity regularization for source identification for elliptic PDEs, Penalized wavelet nonparametric univariate logistic regression for irregular spaced data, Sparse approximation over the cube, A variable projection method for large-scale inverse problems with \(\ell^1\) regularization, Communication-efficient estimation for distributed subset selection, A framework of regularized
low-rank matrix models for regression and classification, Approximation Analysis of Convolutional Neural Networks, Secure state estimation for cyber-physical systems by unknown input observer with adaptive switching mechanism, A unified analysis of convex and non-convex \(\ell_p\)-ball projection problems, Doubly iteratively reweighted algorithm for constrained compressed sensing models, Proximal variable metric method with spectral diagonal update for large scale sparse optimization, Study on \(L_1\) over \(L_2\) Minimization for nonnegative signal recovery, From irrevocably modulated filtrations to dynamical equations over random networks, Support Recovery and Parameter Identification of Multivariate ARMA Systems with Exogenous Inputs, Optimal Parseval frames: total coherence and total volume, A Variable Density Sampling Scheme for Compressive Fourier Transform Interferometry, Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems, Spatial feature extraction non-negative tensor factorization for hyperspectral unmixing, Heavy-ball-based hard thresholding algorithms for sparse signal recovery, Binary generalized orthogonal matching pursuit, Stable recovery of weighted sparse signals from phaseless measurements via weighted l1 minimization, Sampling rates for \(\ell^1\)-synthesis, \(\boldsymbol{L_1-\beta L_q}\) Minimization for Signal and Image Recovery, Jointly evolving and compressing fuzzy system for feature reduction and classification, LASSO Reloaded: A Variational Analysis Perspective with Applications to Compressed Sensing, Nonconvex \(\ell_p-\alpha\ell_q\) minimization method and \(p\)-RIP condition for stable recovery of approximately \(k\)-sparse signals, Enhanced total variation minimization for stable image reconstruction, Distributed sparse identification for stochastic dynamic systems under cooperative non-persistent excitation condition, A reduced half thresholding algorithm, Relative Lipschitz-like Property of Parametric 
Systems via Projectional Coderivatives, A global two-stage algorithm for non-convex penalized high-dimensional linear regression problems, A tight bound of modified iterative hard thresholding algorithm for compressed sensing., Facets of high-dimensional Gaussian polytopes, Some results on OMP algorithm for MMV problem, Scaled proximal gradient methods for sparse optimization problems, Optimality conditions for Tucker low-rank tensor optimization, Secure estimation against non‐fixed channel attacks in cyber‐physical systems, Stable Recovery of Sparsely Corrupted Signals Through Justice Pursuit De-Noising, Recent Theoretical Advances in Non-Convex Optimization