Robust Decoding from 1-Bit Compressive Sampling with Ordinary and Regularized Least Squares
From MaRDI portal
Publication:3174766
Abstract: In 1-bit compressive sensing (1-bit CS), a target signal is coded into binary measurements, and one goal is to recover the signal from noisy and quantized samples. Mathematically, the 1-bit CS model reads: \(y = \eta \odot \operatorname{sign}(\Psi x^{*} + \varepsilon)\), where \(x^{*} \in \mathbb{R}^{n}\) is the target signal, \(\Psi \in \mathbb{R}^{m \times n}\) is the measurement matrix, \(\varepsilon \in \mathbb{R}^{m}\) is the random error before quantization, and \(\eta \in \{-1, 1\}^{m}\) is a random vector modeling the sign flips. Due to the presence of nonlinearity, noise and sign flips, it is quite challenging to decode from 1-bit CS measurements. In this paper, we consider a least squares approach under the over-determined and under-determined settings. For \(m \geq n\), we show that, up to a constant \(c\), with high probability, the least squares solution approximates \(x^{*}\) with precision \(\delta\) as long as \(m \gtrsim n/\delta^{2}\). For \(m < n\), we prove that, up to a constant \(c\), with high probability, the \(\ell^1\)-regularized least-squares solution lies in the ball with center \(x^{*}\) and radius \(\delta\), provided that \(x^{*}\) is \(s\)-sparse and \(m \gtrsim s \log n/\delta^{2}\). We introduce a Newton-type method, the so-called primal-dual active set (PDAS) algorithm, to solve the resulting nonsmooth optimization problem. The PDAS has a one-step convergence property and only requires solving a small least-squares problem on the active set, so it is extremely efficient for recovering sparse signals when combined with continuation. We also propose a novel regularization parameter selection rule that introduces no extra computational overhead. Extensive numerical experiments are presented to illustrate the robustness of the proposed model and the efficiency of the algorithm.
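The measurement model and the active-set decoding described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it simulates 1-bit measurements \(y = \eta \odot \operatorname{sign}(\Psi x^{*} + \varepsilon)\) and decodes with a generic primal-dual active-set iteration for the \(\ell^1\)-regularized least-squares problem. The problem sizes, noise level, sign-flip probability, and the fixed regularization parameter are arbitrary choices, and the paper's continuation strategy and parameter selection rule are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: m measurements, n unknowns, s nonzeros.
m, n, s = 500, 1000, 10

# Ground-truth s-sparse signal x_star, normalized (1-bit CS loses the scale).
x_star = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_star[support] = rng.standard_normal(s)
x_star /= np.linalg.norm(x_star)

# 1-bit CS model: y = eta * sign(Psi @ x_star + eps).
Psi = rng.standard_normal((m, n))
eps = 0.1 * rng.standard_normal(m)               # noise before quantization
eta = np.where(rng.random(m) < 0.05, -1.0, 1.0)  # ~5% random sign flips
y = eta * np.sign(Psi @ x_star + eps)

def pdas_lasso(Psi, y, lam, max_iter=50):
    """Generic primal-dual active-set sketch for
    min_x 0.5*||Psi x - y||^2 + lam*||x||_1."""
    n = Psi.shape[1]
    x = np.zeros(n)
    d = Psi.T @ y                    # dual variable d = Psi^T (y - Psi x)
    active_prev = None
    for _ in range(max_iter):
        # Active set from the combined primal-dual quantity x + d.
        active = np.flatnonzero(np.abs(x + d) > lam)
        if active_prev is not None and np.array_equal(active, active_prev):
            break                    # active set stabilized: KKT conditions hold
        sgn = np.sign((x + d)[active])
        x = np.zeros(n)
        # Small least-squares solve restricted to the active set:
        #   Psi_A^T Psi_A x_A = Psi_A^T y - lam * sgn
        PsiA = Psi[:, active]
        rhs = PsiA.T @ y - lam * sgn
        x[active] = np.linalg.lstsq(PsiA.T @ PsiA, rhs, rcond=None)[0]
        d = Psi.T @ (y - Psi @ x)
        active_prev = active
    return x

lam = 0.5 * np.max(np.abs(Psi.T @ y))        # illustrative fixed choice
x_hat = pdas_lasso(Psi, y, lam)
x_hat /= max(np.linalg.norm(x_hat), 1e-12)   # compare directions only
err = np.linalg.norm(x_hat - x_star)
print(f"direction error: {err:.3f}")
```

Each iteration only touches the current active set, so the linear solve is small whenever the estimate is sparse; once the active set stops changing, the iterate satisfies the optimality conditions and the loop exits.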
Recommendations
- Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
- One-Bit Compressive Sensing With Norm Estimation
- Quantization of compressive samples with stable and robust recovery
- Robust one-bit compressed sensing with partial circulant matrices
- Robust 1-bit compressed sensing via hinge loss minimization
- 1-bit compressive sensing: reformulation and RRSP-based sign recovery theory
- On Compressive Sensing in Coding Problems: A Rigorous Approach
- Stability of 1-bit compressed sensing in sparse data reconstruction
- One-bit compressed sensing by greedy algorithms
- One-bit compressed sensing by linear programming
Cites work
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 936298
- A Primal Dual Active Set Algorithm With Continuation for Compressed Sensing
- A duality-based splitting method for \(\ell^1-TV\) image restoration with automatic regularization parameter choice
- A mathematical introduction to compressive sensing
- A new approach to variable selection in least squares problems
- A nonsmooth version of Newton's method
- A preconditioner for a primal-dual Newton conjugate gradient method for compressed sensing problems
- A primal dual active set with continuation algorithm for the \(\ell^0\)-regularized optimization problem
- A regularization parameter for nonsmooth Tikhonov regularization
- A semismooth Newton method for \(\mathrm{L}^1\) data fitting with automatic choice of regularization parameters and noise calibration
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A wavelet tour of signal processing. The sparse way.
- Atomic Decomposition by Basis Pursuit
- Compressed sensing
- Estimation in high dimensions: a geometric perspective
- High-dimensional estimation with geometric constraints
- Information criteria and statistical modeling.
- Introductory lectures on convex optimization. A basic course.
- Inverse problems. Tikhonov theory and algorithms
- Iterative parameter choice by discrepancy principle
- Lagrange Multiplier Approach to Variational Problems and Applications
- Matrix-free interior point method for compressed sensing problems
- Noisy 1-bit compressive sensing: models and algorithms
- One-Bit Compressive Sensing With Norm Estimation
- One-bit compressed sensing by greedy algorithms
- One-bit compressed sensing by linear programming
- One-bit compressed sensing with non-Gaussian measurements
- Optimization with sparsity-inducing penalties
- Proximal splitting methods in signal processing
- Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
- Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
- Robust 1-bit Compressive Sensing Using Adaptive Outlier Pursuit
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Selected Works of David Brillinger
- Signal Recovery by Proximal Forward-Backward Splitting
- Trust, But Verify: Fast and Accurate Signal Recovery From 1-Bit Compressive Measurements
Cited in (11)
- A primal dual active set with continuation algorithm for high-dimensional nonconvex SICA-penalized regression
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Quadratic Convergence of Smoothing Newton's Method for 0/1 Loss Optimization
- Distributed Sparse Composite Quantile Regression in Ultrahigh Dimensions
- Noisy 1-bit compressive sensing: models and algorithms
- Byzantine-robust and efficient distributed sparsity learning: a surrogate composite quantile regression approach
- Just least squares: binary compressive sampling with low generative intrinsic dimension
- NBIHT: An Efficient Algorithm for 1-Bit Compressed Sensing With Optimal Error Decay Rate
- Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements
- An algorithm solving compressive sensing problem based on maximal monotone operators
- REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES