The Sample Complexity of Weighted Sparse Approximation

From MaRDI portal
Publication:4619617




Abstract: For Gaussian sampling matrices, we provide bounds on the minimal number of measurements $m$ required to achieve robust weighted sparse recovery guarantees, in terms of how well a given prior model for the sparsity support aligns with the true underlying support. Our main contribution is the following: for a sparse vector supported on an unknown set $\mathcal{S} \subset \{1, \dots, N\}$ with $|\mathcal{S}| \leq k$, if $\mathcal{S}$ has \emph{weighted cardinality} $\omega(\mathcal{S}) := \sum_{j \in \mathcal{S}} \omega_j^2$, and if the weights on $\mathcal{S}^c$ exhibit mild growth, $\omega_j^2 \geq \gamma \log(j/\omega(\mathcal{S}))$ for $j \in \mathcal{S}^c$ and $\gamma > 0$, then the sample complexity for sparse recovery via weighted $\ell_1$-minimization using weights $\omega_j$ is linear in the weighted sparsity level, and $m = \mathcal{O}(\omega(\mathcal{S})/\gamma)$. This main result generalizes several special cases, including a) the standard sparse recovery setting, where all weights $\omega_j \equiv 1$ and $m = \mathcal{O}\left(k \log\left(N/k\right)\right)$; b) the setting where the support is known a priori, and $m = \mathcal{O}(k)$; and c) the setting of sparse recovery with prior information, where $m$ depends on how well the weights are aligned with the support set $\mathcal{S}$. We further extend the results in case c) to the setting of additive noise. Our results are \emph{nonuniform}: that is, they apply for a fixed support, unknown a priori, and the weights on $\mathcal{S}$ do not all have to be smaller than the weights on $\mathcal{S}^c$ for our recovery results to hold.
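The recovery program studied in the abstract, weighted $\ell_1$-minimization $\min_x \sum_j \omega_j |x_j|$ subject to $Ax = y$, can be posed as a linear program by splitting $x$ into its positive and negative parts. The sketch below is illustrative only and uses SciPy's generic LP solver; the function name `weighted_l1_recover`, the problem sizes, and the choice of uniform weights (the special case a) of the abstract) are assumptions for the demo, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_recover(A, y, omega):
    """Solve min sum_j omega_j |x_j|  s.t.  A x = y.

    Standard LP reformulation: write x = xp - xn with xp, xn >= 0,
    so |x_j| = xp_j + xn_j and the objective becomes omega @ (xp + xn).
    """
    m, N = A.shape
    c = np.concatenate([omega, omega])      # objective on [xp; xn]
    A_eq = np.hstack([A, -A])               # A (xp - xn) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(0)
N, m, k = 60, 25, 3                         # m well above k log(N/k)
A = rng.standard_normal((m, N)) / np.sqrt(m)  # Gaussian sampling matrix
x = np.zeros(N)
x[[2, 5, 11]] = [1.0, -2.0, 1.5]            # unknown support S, |S| = k
omega = np.ones(N)                          # uniform weights: standard l1 case
y = A @ x
x_hat = weighted_l1_recover(A, y, omega)
print("recovery error:", np.linalg.norm(x_hat - x))
```

With non-uniform weights, decreasing `omega` on indices believed to be in the support (and letting it grow off the support) encodes the prior model the paper analyzes.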










This page was built for publication: The Sample Complexity of Weighted Sparse Approximation
