Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
Publication: 2989476
DOI: 10.1109/TIT.2012.2207945
zbMath: 1364.94153
arXiv: 1202.1212
OpenAlex: W2964322027
MaRDI QID: Q2989476
Publication date: 8 June 2017
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://arxiv.org/abs/1202.1212
Convex programming (90C25)
Applications of mathematical programming (90C90)
Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Information theory (general) (94A15)
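For context, the estimator that the paper analyzes can be written as a single convex program (a sketch following the linked arXiv version; here \(a_1, \dots, a_m\) are the measurement vectors, \(y_i \in \{-1, 1\}\) the one-bit observations, and \(s\) the assumed sparsity level):

\[
\max_{x' \in \mathbb{R}^n} \; \frac{1}{m} \sum_{i=1}^{m} y_i \langle a_i, x' \rangle
\quad \text{subject to} \quad \|x'\|_2 \le 1, \quad \|x'\|_1 \le \sqrt{s}.
\]

The objective is linear and the constraint set is convex, so the program is tractable; since it depends only on the signs \(y_i\), it accommodates noisy and adversarially flipped bits as well as the sparse logistic regression model named in the title.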
Related Items (57)
A Simple Tool for Bounding the Deviation of Random Matrices on Geometric Sets
Sigma delta quantization with harmonic frames and partial Fourier ensembles
Robust Decoding from 1-Bit Compressive Sampling with Ordinary and Regularized Least Squares
Quantization and Compressive Sensing
A one-bit, comparison-based gradient estimator
Phase retrieval by binary questions: which complementary subspace is closer?
Convergence guarantee for the sparse monotone single index model
\(\ell^1\)-analysis minimization and generalized (co-)sparsity: when does recovery succeed?
Generalizing CoSaMP to signals from a union of low dimensional linear subspaces
Fast and Reliable Parameter Estimation from Nonlinear Observations
Sigma Delta Quantization for Images
A theory of capacity and sparse neural encoding
A unified approach to uniform signal recovery from nonlinear observations
Robust one-bit compressed sensing with partial circulant matrices
Statistical Inference for High-Dimensional Generalized Linear Models With Binary Outcomes
Noisy 1-bit compressive sensing: models and algorithms
Just least squares: binary compressive sampling with low generative intrinsic dimension
Sparse recovery from saturated measurements
Representation and coding of signal geometry
Time for dithering: fast and quantized random embeddings via the restricted isometry property
High-dimensional estimation with geometric constraints
One-bit sensing, discrepancy and Stolarsky's principle
Flavors of Compressive Sensing
One-bit compressed sensing with non-Gaussian measurements
Linear regression with sparsely permuted data
Structure from Randomness in Halfspace Learning with the Zero-One Loss
An Introduction to Compressed Sensing
Quantized Compressed Sensing: A Survey
Classification Scheme for Binary Data with Extensions
Applied harmonic analysis and data processing. Abstracts from the workshop held March 25–31, 2018
Fast binary embeddings with Gaussian circulant matrices: improved bounds
The landscape of empirical risk for nonconvex losses
The recovery of ridge functions on the hypercube suffers from the curse of dimensionality
Estimation from nonlinear observations via convex programming with application to bilinear regression
On recovery guarantees for one-bit compressed sensing on manifolds
Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness
An extended Newton-type algorithm for \(\ell_2\)-regularized sparse logistic regression and its efficiency for classifying large-scale datasets
Estimation in High Dimensions: A Geometric Perspective
Non-Gaussian hyperplane tessellations and robust one-bit compressed sensing
Double fused Lasso regularized regression with both matrix and vector valued predictors
Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
Simple Classification using Binary Data
Endpoint Results for Fourier Integral Operators on Noncompact Symmetric Spaces
On the Atomic Decomposition of Coorbit Spaces with Non-integrable Kernel
Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
Sparse classification: a scalable discrete optimization perspective
Characterization of ℓ1 minimizer in one-bit compressed sensing
Generalized high-dimensional trace regression via nuclear norm regularization
Least squares estimation in the monotone single index model
Global and Simultaneous Hypothesis Testing for High-Dimensional Logistic Regression Models
One-bit compressed sensing via ℓp (0 < p < 1)-minimization method
Hypothesis testing for high-dimensional sparse binary regression
Classification of COVID19 Patients using robust logistic regression
On the Convergence Rate of Projected Gradient Descent for a Back-Projection Based Objective
AdaBoost and robust one-bit compressed sensing
Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications
Covariance estimation under one-bit quantization