Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
Publication:2989183
Abstract: Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft-thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization (the firm shrinkage nonlinearity and the minimax nonlinearity) and also nonscalar denoisers (block thresholding, monotone regression, and total variation minimization). Let the variables \(\varepsilon = k/N\) and \(\delta = n/N\) denote the generalized sparsity and undersampling fractions for sampling the \(k\)-generalized-sparse \(N\)-vector \(x_0\) according to \(y = A x_0\). Here \(A\) is an \(n \times N\) measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve \(\delta = \delta(\varepsilon)\) separating successful from unsuccessful reconstruction of \(x_0\) by AMP is given by \(\delta = M(\varepsilon \mid \text{Denoiser})\), where \(M(\varepsilon \mid \text{Denoiser})\) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally tuned denoiser in the directly observed problem \(y = x + z\). In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem.
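As a concrete illustration of the formula \(\delta = M(\varepsilon \mid \text{Denoiser})\), the sketch below (not from the paper; the threshold rule \(\theta_t = \alpha \sigma_t\), the matrix normalization, and all parameter choices are assumptions of this illustration) evaluates the minimax MSE of the optimally tuned soft-thresholding denoiser via its closed-form worst-case risk, and then runs a standard AMP recursion on measurements sampled just above the predicted transition.

```python
import numpy as np
from scipy.stats import norm

# --- Minimax MSE of the soft-thresholding denoiser in y = x + z ------------

LAM_GRID = np.linspace(0.0, 6.0, 4001)

def soft_risk(lam, eps):
    """Worst-case per-coordinate MSE over eps-sparse signals at threshold lam.
    Risk at x = 0 is 2[(1 + lam^2) Phi(-lam) - lam phi(lam)]; as |x| -> inf
    it tends to 1 + lam^2 (the two least-favorable extremes)."""
    risk_zero = 2 * ((1 + lam**2) * norm.cdf(-lam) - lam * norm.pdf(lam))
    return eps * (1 + lam**2) + (1 - eps) * risk_zero

def minimax_mse(eps):
    """M(eps | soft threshold): MSE of the optimally tuned threshold."""
    return soft_risk(LAM_GRID, eps).min()

def optimal_threshold(eps):
    return LAM_GRID[np.argmin(soft_risk(LAM_GRID, eps))]

# --- Minimal AMP recursion with the soft-thresholding denoiser -------------

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, alpha, n_iter=60):
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                                 # effective observations
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)  # threshold ~ noise level
        x_new = soft(r, theta)
        # Onsager correction: (1/delta) * z * average of the denoiser derivative.
        onsager = (N / n) * z * np.mean(np.abs(r) > theta)
        z = y - A @ x_new + onsager
        x = x_new
    return x

# --- Demo: sample just above the predicted transition and reconstruct ------

rng = np.random.default_rng(0)
eps = 0.05
print(f"predicted transition: delta = {minimax_mse(eps):.3f}")

N = 2000
delta = minimax_mse(eps) + 0.1                # safely inside the success region
n, k = int(delta * N), int(eps * N)
A = rng.standard_normal((n, N)) / np.sqrt(n)  # columns approximately unit norm
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = 5 * rng.standard_normal(k)
y = A @ x0
x_hat = amp(y, A, alpha=optimal_threshold(eps))
print(f"relative error: {np.linalg.norm(x_hat - x0) / np.linalg.norm(x0):.2e}")
```

Under these assumptions, the relative error should drop to near zero once \(\delta\) exceeds \(M(\varepsilon)\) and reconstruction should fail for \(\delta\) below it, which is exactly the dichotomy the abstract's phase-transition formula predicts.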
Cited in 26 documents:
- Parametrized quasi-soft thresholding operator for compressed sensing and matrix completion
- Efficient Threshold Selection for Multivariate Total Variation Denoising
- Universality of approximate message passing algorithms
- A tight bound of hard thresholding
- A simple homotopy proximal mapping algorithm for compressive sensing
- Performance comparisons of greedy algorithms in compressed sensing
- A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
- Minimax risk of matrix denoising by singular value thresholding
- CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Estimation of low-rank matrices via approximate message passing
- Activation function design for deep networks: linearity and effective initialisation
- The committee machine: computational to statistical gaps in learning a two-layers neural network
- Recovering structured signals in noise: least-squares meets compressed sensing
- Phase Transitions in Recovery of Structured Signals From Corrupted Measurements
- Content-aware compressive sensing recovery using Laplacian scale mixture priors and side information
- Typical reconstruction performance for distributed compressed sensing based on \(\ell_{2,1}\)-norm regularized least square and Bayesian optimal reconstruction: influences of noise
- Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator
- LASSO risk and phase transition under dependence
- Sharp MSE bounds for proximal denoising
- An introduction to compressed sensing
- Guarantees of total variation minimization for signal recovery
- Optimal Phase Transitions in Compressed Sensing
- Universality in polytope phase transitions and message passing algorithms
- Plug-in estimation in high-dimensional linear inverse problems: a rigorous analysis
- On convergence of the cavity and Bolthausen's TAP iterations to the local magnetization