Bounding the Smallest Singular Value of a Random Matrix Without Concentration
Publication: 3460356
DOI: 10.1093/imrn/rnv096
zbMath: 1331.15027
arXiv: 1312.3580
OpenAlex: W2963441888
MaRDI QID: Q3460356
Shahar Mendelson, Vladimir I. Koltchinskii
Publication date: 7 January 2016
Published in: International Mathematics Research Notices
Full work available at URL: https://arxiv.org/abs/1312.3580
Mathematics Subject Classification:
Random matrices (probabilistic aspects) (60B20)
Inequalities involving eigenvalues and eigenvectors (15A42)
Eigenvalues, singular values, and eigenvectors (15A18)
Random matrices (algebraic aspects) (15B52)
Related Items
Coverings of random ellipsoids, and invertibility of matrices with i.i.d. heavy-tailed entries
On aggregation for heavy-tailed classes
Deep learning: a statistical viewpoint
Simultaneous Phase Retrieval and Blind Deconvolution via Convex Programming
Performance of empirical risk minimization in linear aggregation
Controlling the least eigenvalue of a random Gram matrix
On the interval of fluctuation of the singular values of random matrices
The smallest singular value of random rectangular matrices with no moment assumptions on entries
The method of perpendiculars of finding estimates from below for minimal singular eigenvalues of random matrices
Penalized least square in sparse setting with convex penalty and non Gaussian errors
The lower tail of random quadratic forms with applications to ordinary least squares
Low rank matrix recovery from rank one measurements
Generalized notions of sparsity and restricted isometry property. II: Applications
Robust statistical learning with Lipschitz and convex loss functions
On the convergence of the extremal eigenvalues of empirical covariance matrices with dependence
On delocalization of eigenvectors of random non-Hermitian matrices
High-dimensional robust regression with \(L_q\)-loss functions
Generic error bounds for the generalized Lasso with sub-exponential data
Empirical risk minimization: probabilistic complexity and stepsize strategy
Tyler's and Maronna's M-estimators: non-asymptotic concentration results
Spiked singular values and vectors under extreme aspect ratios
Robust machine learning by median-of-means: theory and practice
Random polytopes obtained by matrices with heavy-tailed entries
Non-asymptotic bounds for the \(\ell_{\infty}\) estimator in linear regression with uniform noise
Dimension-free bounds for sums of independent matrices and simple tensors via the variational principle
Robust classification via MOM minimization
Stable low-rank matrix recovery via null space properties
Complex phase retrieval from subgaussian measurements
Low-rank matrix recovery via rank one tight frame measurements
On higher order isotropy conditions and lower bounds for sparse quadratic forms
The smallest singular value of a shifted $d$-regular random square matrix
The limit of the smallest singular value of random matrices with i.i.d. entries
Slope meets Lasso: improved oracle bounds and optimality
Simplicial faces of the set of correlation matrices
Regularization and the small-ball method. I: Sparse recovery
Sparse recovery under weak moment assumptions
Estimation from nonlinear observations via convex programming with application to bilinear regression
Learning from MOM's principles: Le Cam's approach
An upper bound on the smallest singular value of a square random matrix
Learning without Concentration
Phase retrieval with PhaseLift algorithm
Approximating \(L_p\) unit balls via random sampling
Solving equations of random convex functions via anchored regression
The smallest singular value of heavy-tailed not necessarily i.i.d. random matrices via random rounding
Regularization and the small-ball method II: complexity dependent error rates
Preserving injectivity under subgaussian mappings and its application to compressed sensing
On Monte-Carlo methods in convex stochastic optimization
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
On the robustness of minimum norm interpolators and regularized empirical risk minimizers
Low-Rank Matrix Estimation from Rank-One Projections by Unlifted Convex Optimization
Proof methods for robust low-rank matrix recovery
Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing