Simple bounds for recovering low-complexity models

From MaRDI portal
Publication:378116

DOI: 10.1007/s10107-012-0540-0 · zbMATH Open: 1278.15038 · arXiv: 1106.1474 · OpenAlex: W2008997396 · MaRDI QID: Q378116


Authors: Benjamin Recht, Emmanuel J. Candès


Publication date: 11 November 2013

Published in: Mathematical Programming. Series A. Series B

Abstract: This note presents a unified analysis of the recovery of simple objects from random linear measurements. When the linear functionals are Gaussian, we show that an s-sparse vector in R^n can be efficiently recovered from 2s log n measurements with high probability, and a rank-r n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability. For sparse vectors, this is within an additive factor of the best known nonasymptotic bounds. For low-rank matrices, this matches the best known bounds. We present a parallel analysis for block-sparse vectors, obtaining similarly tight bounds. In the case of sparse and block-sparse signals, we additionally demonstrate that our bounds are only slightly weakened when the measurement map is a random sign matrix. Our results are based on analyzing a particular dual point which certifies optimality conditions of the respective convex programming problem. Our calculations rely only on standard large deviation inequalities, and our analysis is self-contained.
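The sparse-vector result in the abstract can be illustrated numerically. The sketch below (not the paper's analysis, just a demonstration of the recovery problem it studies) draws m ≈ 2s log n Gaussian measurements of an s-sparse vector and recovers it by l1 minimization, min ||x||_1 subject to Ax = b, cast as a linear program; the problem sizes and solver choice are illustrative assumptions.

```python
# Hedged sketch: l1 recovery of an s-sparse vector from ~2s log n
# Gaussian measurements, as studied in the paper. Sizes are arbitrary.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, s = 100, 4
m = int(np.ceil(2 * s * np.log(n))) + 5  # bound from the abstract, plus slack

# Ground-truth s-sparse signal and a Gaussian measurement map
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# LP reformulation: variables z = [x, t]; minimize sum(t)
# subject to -t <= x <= t (so t bounds |x|) and A x = b.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]),    #  x - t <= 0
                  np.hstack([-I, -I])])  # -x - t <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * n))
x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With these sizes the measurement count is well above the 2s log n threshold, so the LP typically recovers x exactly (up to solver tolerance), consistent with the high-probability guarantee stated in the abstract.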


Full work available at URL: https://arxiv.org/abs/1106.1474










Cited In (36)





