Learning without Concentration

From MaRDI portal
Publication:2796408

DOI: 10.1145/2699439
zbMath: 1333.68232
arXiv: 1401.0304
OpenAlex: W2103775046
MaRDI QID: Q2796408

Shahar Mendelson

Publication date: 24 March 2016

Published in: Journal of the ACM

Full work available at URL: https://arxiv.org/abs/1401.0304



Related Items

On least squares estimation under heteroscedastic and heavy-tailed errors
Generalization bounds for non-stationary mixing processes
On aggregation for heavy-tailed classes
Performance of empirical risk minimization in linear aggregation
Aggregated hold out for sparse linear regression with a robust loss function
Simpler PAC-Bayesian bounds for hostile data
Learning without concentration for general loss functions
On the geometry of polytopes generated by heavy-tailed random vectors
Upper bounds on product and multiplier empirical processes
Low rank matrix recovery from rank one measurements
Robust statistical learning with Lipschitz and convex loss functions
Posterior concentration and fast convergence rates for generalized Bayesian learning
Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
Generic error bounds for the generalized Lasso with sub-exponential data
Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints
Regularization, sparse recovery, and median-of-means tournaments
Empirical risk minimization for heavy-tailed losses
Finite sample behavior of a sieve profile estimator in the single index model
A unified approach to uniform signal recovery from nonlinear observations
Orthogonal statistical learning
Robust machine learning by median-of-means: theory and practice
Mean estimation in high dimension
On the Geometry of Random Polytopes
Robust classification via MOM minimization
Stable low-rank matrix recovery via null space properties
Approximating the covariance ellipsoid
Relative deviation learning bounds and generalization with unbounded loss functions
Optimal rates of statistical seriation
Extending the scope of the small-ball method
Complex phase retrieval from subgaussian measurements
Quantized Compressed Sensing: A Survey
Low-rank matrix recovery via rank one tight frame measurements
Thin-shell concentration for random vectors in Orlicz balls via moderate deviations and Gibbs measures
Column normalization of a random measurement matrix
Slope meets Lasso: improved oracle bounds and optimality
Regularization and the small-ball method. I: Sparse recovery
Sparse recovery under weak moment assumptions
Estimation from nonlinear observations via convex programming with application to bilinear regression
Learning from MOM's principles: Le Cam's approach
Variance-based regularization with convex objectives
Phase retrieval with PhaseLift algorithm
Approximating \(L_p\) unit balls via random sampling
Non-Gaussian hyperplane tessellations and robust one-bit compressed sensing
A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Convergence rates of least squares regression estimators with heavy-tailed errors
Endpoint Results for Fourier Integral Operators on Noncompact Symmetric Spaces
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
Solving equations of random convex functions via anchored regression
Regularization and the small-ball method II: complexity dependent error rates
Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
Mean estimation and regression under heavy-tailed distributions: A survey
On Monte-Carlo methods in convex stochastic optimization
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Low-Rank Matrix Estimation from Rank-One Projections by Unlifted Convex Optimization
AdaBoost and robust one-bit compressed sensing
Proof methods for robust low-rank matrix recovery
Suboptimality of constrained least squares and improvements via non-linear predictors
Distribution-free robust linear regression
Fast Convex Pruning of Deep Neural Networks



Cites Work