Bounding the Smallest Singular Value of a Random Matrix Without Concentration

From MaRDI portal

Publication: 3460356

DOI: 10.1093/IMRN/RNV096 · zbMath: 1331.15027 · arXiv: 1312.3580 · OpenAlex: W2963441888 · MaRDI QID: Q3460356

Shahar Mendelson, Vladimir I. Koltchinskii

Publication date: 7 January 2016

Published in: International Mathematics Research Notices

Full work available at URL: https://arxiv.org/abs/1312.3580
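As a quick illustrative sketch of the paper's setting (not code from the paper itself, and with all dimensions and distributions chosen here purely for illustration): the result concerns lower bounds on the smallest singular value of a tall random matrix whose entries need not have sub-Gaussian, concentrated tails. The snippet below draws a matrix with i.i.d. heavy-tailed (Student-t) entries and compares its smallest singular value to the classical Gaussian-case heuristic `sqrt(N) - sqrt(n)`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: N rows (samples), n columns, tall matrix N >> n.
N, n = 2000, 50

# Heavy-tailed i.i.d. entries: Student-t with 3 degrees of freedom has
# finite variance but is not sub-Gaussian -- the regime the paper targets.
A = rng.standard_t(df=3, size=(N, n))

# Normalize entries to unit variance (Var of t_3 is df/(df-2) = 3).
A /= np.sqrt(3.0)

s_min = np.linalg.svd(A, compute_uv=False).min()

# Heuristic benchmark from the Gaussian case: s_min is roughly sqrt(N) - sqrt(n).
benchmark = np.sqrt(N) - np.sqrt(n)
print(f"smallest singular value: {s_min:.2f}, Gaussian heuristic: {benchmark:.2f}")
```

Empirically the smallest singular value stays bounded well away from zero even though the entries are heavy-tailed, which is the phenomenon the paper proves via the small-ball method rather than via concentration inequalities.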




Related Items (53)

- Coverings of random ellipsoids, and invertibility of matrices with i.i.d. heavy-tailed entries
- On aggregation for heavy-tailed classes
- Deep learning: a statistical viewpoint
- Simultaneous Phase Retrieval and Blind Deconvolution via Convex Programming
- Performance of empirical risk minimization in linear aggregation
- Controlling the least eigenvalue of a random Gram matrix
- On the interval of fluctuation of the singular values of random matrices
- The smallest singular value of random rectangular matrices with no moment assumptions on entries
- The method of perpendiculars of finding estimates from below for minimal singular eigenvalues of random matrices
- Penalized least square in sparse setting with convex penalty and non Gaussian errors
- The lower tail of random quadratic forms with applications to ordinary least squares
- Low rank matrix recovery from rank one measurements
- Generalized notions of sparsity and restricted isometry property. II: Applications
- Robust statistical learning with Lipschitz and convex loss functions
- On the convergence of the extremal eigenvalues of empirical covariance matrices with dependence
- On delocalization of eigenvectors of random non-Hermitian matrices
- High-dimensional robust regression with \(L_q\)-loss functions
- Generic error bounds for the generalized Lasso with sub-exponential data
- Empirical risk minimization: probabilistic complexity and stepsize strategy
- Tyler's and Maronna's M-estimators: non-asymptotic concentration results
- Spiked singular values and vectors under extreme aspect ratios
- Robust machine learning by median-of-means: theory and practice
- Random polytopes obtained by matrices with heavy-tailed entries
- Non-asymptotic bounds for the \(\ell_{\infty}\) estimator in linear regression with uniform noise
- Dimension-free bounds for sums of independent matrices and simple tensors via the variational principle
- Robust classification via MOM minimization
- Stable low-rank matrix recovery via null space properties
- Complex phase retrieval from subgaussian measurements
- Low-rank matrix recovery via rank one tight frame measurements
- On higher order isotropy conditions and lower bounds for sparse quadratic forms
- The smallest singular value of a shifted $d$-regular random square matrix
- The limit of the smallest singular value of random matrices with i.i.d. entries
- Unnamed Item
- Slope meets Lasso: improved oracle bounds and optimality
- Simplicial faces of the set of correlation matrices
- Regularization and the small-ball method. I: Sparse recovery
- Sparse recovery under weak moment assumptions
- Estimation from nonlinear observations via convex programming with application to bilinear regression
- Learning from MOM's principles: Le Cam's approach
- An upper bound on the smallest singular value of a square random matrix
- Learning without Concentration
- Phase retrieval with PhaseLift algorithm
- Approximating \(L_p\) unit balls via random sampling
- Solving equations of random convex functions via anchored regression
- The smallest singular value of heavy-tailed not necessarily i.i.d. random matrices via random rounding
- Regularization and the small-ball method II: complexity dependent error rates
- Preserving injectivity under subgaussian mappings and its application to compressed sensing
- On Monte-Carlo methods in convex stochastic optimization
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
- Low-Rank Matrix Estimation from Rank-One Projections by Unlifted Convex Optimization
- Proof methods for robust low-rank matrix recovery
- Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing







This page was built for publication: Bounding the Smallest Singular Value of a Random Matrix Without Concentration