Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements

DOI: 10.1109/TIT.2011.2111771
zbMATH: 1366.90160
MaRDI QID: Q5280996

Emmanuel J. Candès, Yaniv Plan

Publication date: April 2011

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://doi.org/10.1109/tit.2011.2111771


Mathematics Subject Classification:

90C25: Convex programming

94A12: Signal theory (characterization, reconstruction, filtering, etc.)
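
For context, the classification reflects the paper's central problem: recovering a low-rank matrix from a small number of noisy random linear measurements via nuclear-norm-penalized least squares. The sketch below is a minimal illustration of that problem class, not the authors' code; it assumes cvxpy and NumPy are installed, and the dimensions, noise level, and regularization weight lam are all hypothetical.

```python
# Minimal sketch: recover a low-rank matrix from noisy Gaussian measurements
# by nuclear-norm-penalized least squares (the problem class the paper
# analyzes). All problem sizes and the weight lam below are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n1, n2, r, m = 20, 20, 2, 300            # matrix shape, true rank, #measurements

M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-r truth
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)   # Gaussian sensing operator
y = A @ M.ravel(order="F") + 0.01 * rng.standard_normal(m)  # noisy measurements

X = cp.Variable((n1, n2))
lam = 0.1                                # regularization weight (hypothetical)
# cp.vec flattens column-major, matching M.ravel(order="F") above
objective = cp.Minimize(0.5 * cp.sum_squares(A @ cp.vec(X) - y)
                        + lam * cp.normNuc(X))
cp.Problem(objective).solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```

With m on the order of r(n1 + n2) measurements, the estimate is typically accurate up to the noise level; this is the regime the paper's oracle inequalities quantify.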


Related Items

Time for dithering: fast and quantized random embeddings via the restricted isometry property
High-dimensional estimation with geometric constraints
Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements
Stable low-rank matrix recovery via null space properties
On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
Regularization and the small-ball method II: complexity dependent error rates
ELASTIC-NET REGULARIZATION FOR LOW-RANK MATRIX RECOVERY
Geometric median and robust estimation in Banach spaces
On two continuum armed bandit problems in high dimensions
Solving variational inequalities with monotone operators on domains given by linear minimization oracles
Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems
Optimal large-scale quantum state tomography with Pauli measurements
Matrix completion via max-norm constrained optimization
Geometric inference for general high-dimensional linear inverse problems
Estimation of low rank density matrices: bounds in Schatten norms and other distances
Sharp MSE bounds for proximal denoising
Low rank matrix recovery from rank one measurements
Low rank estimation of smooth kernels on graphs
The bounds of restricted isometry constants for low rank matrices recovery
\(s\)-goodness for low-rank matrix recovery
Simple bounds for recovering low-complexity models
Uniqueness conditions for low-rank matrix recovery
Von Neumann entropy penalization and low-rank matrix estimation
Guaranteed clustering and biclustering via semidefinite programming
Decomposable norm minimization with proximal-gradient homotopy algorithm
Dimensionality reduction with subgaussian matrices: a unified theory
Estimation of high-dimensional low-rank matrices
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
Optimal selection of reduced rank estimators of high-dimensional matrices
Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
Joint variable and rank selection for parsimonious estimation of high-dimensional matrices
A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
Stability of the elastic net estimator
On signal detection and confidence sets for low rank inference problems
A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery
Signal recovery under cumulative coherence
Trace regression model with simultaneously low rank and row(column) sparse parameter
The minimal measurement number for low-rank matrix recovery
Adaptive confidence sets for matrix completion
Painless breakups -- efficient demixing of low rank matrices
Cross: efficient low-rank tensor completion
Signal recovery under mutual incoherence property and oracle inequalities
Learning semidefinite regularizers
Regularization and the small-ball method I: Sparse recovery
Equivalent Lipschitz surrogates for zero-norm and rank optimization problems
Templates for convex cone problems with applications to sparse signal recovery
On the exponentially weighted aggregate with the Laplace prior
The convex geometry of linear inverse problems
Rank penalized estimators for high-dimensional matrices
Low-rank matrix recovery via regularized nuclear norm minimization
Oracle posterior contraction rates under hierarchical priors
Tensor theta norms and low rank recovery
Error bound of critical points and KL property of exponent 1/2 for squared F-norm regularized factorization
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
Spectral thresholding for the estimation of Markov chain transition operators
An optimal statistical and computational framework for generalized tensor estimation
Tight risk bound for high dimensional time series completion
An inexact proximal DC algorithm with sieving strategy for rank constrained least squares semidefinite programming
On the robustness of noise-blind low-rank recovery from rank-one measurements
Guarantees of Riemannian optimization for low rank matrix completion
High-dimensional VAR with low-rank transition
A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery
Theoretical investigation of generalization bounds for adversarial learning of deep neural networks
Sharp RIP bound for sparse signal and low-rank matrix recovery
Convergence of projected Landweber iteration for matrix rank minimization
Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO
RIP-based performance guarantee for low-tubal-rank tensor recovery
Approximation of generalized ridge functions in high dimensions
Truncated sparse approximation property and truncated \(q\)-norm minimization
Non-intrusive tensor reconstruction for high-dimensional random PDEs
Optimal RIP bounds for sparse signals recovery via \(\ell_p\) minimization
ROP: matrix recovery via rank-one projections
Stable recovery of low rank matrices from nuclear norm minimization
Stable recovery of analysis based approaches
Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery
Low rank tensor recovery via iterative hard thresholding
Iterative hard thresholding for low-rank recovery from rank-one projections
RIPless compressed sensing from anisotropic measurements
On a unified view of nullspace-type conditions for recoveries associated with general sparsity structures
Asymptotic equivalence of quantum state tomography and noisy matrix completion
Convergence analysis of projected gradient descent for Schatten-\(p\) nonconvex matrix recovery
Stable recovery of low-rank matrix via nonconvex Schatten \(p\)-minimization
On the Schatten \(p\)-quasi-norm minimization for low-rank matrix recovery
Robust recovery of low-rank matrices with non-orthogonal sparse decomposition from incomplete measurements
Terracini convexity
Regularized sample average approximation for high-dimensional stochastic optimization under low-rankness
Adaptive iterative hard thresholding for low-rank matrix recovery and rank-one measurements
Sharp variable selection of a sparse submatrix in a high-dimensional noisy matrix
Low Complexity Regularization of Linear Inverse Problems
Recovery of Low Rank Symmetric Matrices via Schatten p Norm Minimization
Guarantees of Riemannian Optimization for Low Rank Matrix Recovery
Low Rank Estimation of Similarities on Graphs
EXACT LOW-RANK MATRIX RECOVERY VIA NONCONVEX SCHATTEN p-MINIMIZATION
On Cross-Validation for Sparse Reduced Rank Regression
Truncated $l_{1-2}$ Models for Sparse Recovery and Rank Minimization
Median-Truncated Gradient Descent: A Robust and Scalable Nonconvex Approach for Signal Estimation
Sparse Model Uncertainties in Compressed Sensing with Application to Convolutions and Sporadic Communication
Tensor Completion in Hierarchical Tensor Representations
Structured random measurements in signal processing
Isolated calmness of solution mappings and exact recovery conditions for nuclear norm optimization problems
Perturbation analysis of low-rank matrix stable recovery
ISLET: Fast and Optimal Low-Rank Tensor Regression via Importance Sketching
Low rank matrix recovery with adversarial sparse noise
Tensor Regression Using Low-Rank and Sparse Tucker Decompositions
WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions
An Unbiased Approach to Low Rank Recovery
Column $\ell_{2,0}$-Norm Regularized Factorization Model of Low-Rank Matrix Recovery and Its Computation
High-dimensional dynamic systems identification with additional constraints
Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators
Persistent homology for low-complexity models
Nonconvex Robust Low-Rank Matrix Recovery
Jointly low-rank and bisparse recovery: Questions and partial answers
Minimization of the difference of Nuclear and Frobenius norms for noisy low rank matrix recovery
Finding Low-Rank Solutions via Nonconvex Matrix Factorization, Efficiently and Provably
Nonuniform recovery of fusion frame structured sparse signals
Sharp Restricted Isometry Bounds for the Inexistence of Spurious Local Minima in Nonconvex Matrix Recovery
Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint
Recovery of low-rank matrices based on the rank null space properties
Weighted \(l_p - l_1\) minimization methods for block sparse recovery and rank minimization
An Exact and Robust Conformal Inference Method for Counterfactual and Synthetic Controls
Improved RIP-based bounds for guaranteed performance of two compressed sensing algorithms
Iterative hard thresholding for low CP-rank tensor models
High-dimensional latent panel quantile regression with an application to asset pricing
Robust sensing of low-rank matrices with non-orthogonal sparse decomposition
Neural network approximation of continuous functions in high dimensions with applications to inverse problems
Block-sparse recovery and rank minimization using a weighted \(l_p-l_q\) model
Moderate deviations in cycle count
Rate-optimal robust estimation of high-dimensional vector autoregressive models
A singular value shrinkage thresholding algorithm for folded concave penalized low-rank matrix optimization problems
Entrywise limit theorems for eigenvectors of signal-plus-noise matrix models with weak signals