Learning without Concentration

From MaRDI portal

Publication:2796408

DOI: 10.1145/2699439
zbMath: 1333.68232
arXiv: 1401.0304
OpenAlex: W2103775046
MaRDI QID: Q2796408

Shahar Mendelson

Publication date: 24 March 2016

Published in: Journal of the ACM

Full work available at URL: https://arxiv.org/abs/1401.0304






Related Items (68)

On least squares estimation under heteroscedastic and heavy-tailed errors
Generalization bounds for non-stationary mixing processes
On aggregation for heavy-tailed classes
Performance of empirical risk minimization in linear aggregation
Aggregated hold out for sparse linear regression with a robust loss function
Simpler PAC-Bayesian bounds for hostile data
Learning without concentration for general loss functions
On the geometry of polytopes generated by heavy-tailed random vectors
Upper bounds on product and multiplier empirical processes
Low rank matrix recovery from rank one measurements
Unnamed Item
Robust statistical learning with Lipschitz and convex loss functions
Posterior concentration and fast convergence rates for generalized Bayesian learning
Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
Generic error bounds for the generalized Lasso with sub-exponential data
Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints
Regularization, sparse recovery, and median-of-means tournaments
Empirical risk minimization for heavy-tailed losses
Finite sample behavior of a sieve profile estimator in the single index model
A unified approach to uniform signal recovery from nonlinear observations
Orthogonal statistical learning
Robust machine learning by median-of-means: theory and practice
Mean estimation in high dimension
On the Geometry of Random Polytopes
Robust classification via MOM minimization
Stable low-rank matrix recovery via null space properties
Approximating the covariance ellipsoid
Relative deviation learning bounds and generalization with unbounded loss functions
Optimal rates of statistical seriation
Extending the scope of the small-ball method
Complex phase retrieval from subgaussian measurements
Quantized Compressed Sensing: A Survey
Low-rank matrix recovery via rank one tight frame measurements
Stable recovery and the coordinate small-ball behaviour of random vectors
Unnamed Item
Thin-shell concentration for random vectors in Orlicz balls via moderate deviations and Gibbs measures
Unnamed Item
Column normalization of a random measurement matrix
Slope meets Lasso: improved oracle bounds and optimality
Regularization and the small-ball method. I: Sparse recovery
Sparse recovery under weak moment assumptions
Estimation from nonlinear observations via convex programming with application to bilinear regression
Learning from MOM's principles: Le Cam's approach
Unnamed Item
Variance-based regularization with convex objectives
Phase retrieval with PhaseLift algorithm
Approximating \(L_p\) unit balls via random sampling
Non-Gaussian hyperplane tessellations and robust one-bit compressed sensing
The geometric median and applications to robust mean estimation
A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
Empirical risk minimization for time series: nonparametric performance bounds for prediction
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Convergence rates of least squares regression estimators with heavy-tailed errors
Endpoint Results for Fourier Integral Operators on Noncompact Symmetric Spaces
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
Solving equations of random convex functions via anchored regression
Regularization and the small-ball method II: complexity dependent error rates
Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
Mean estimation and regression under heavy-tailed distributions: A survey
On Monte-Carlo methods in convex stochastic optimization
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Low-Rank Matrix Estimation from Rank-One Projections by Unlifted Convex Optimization
AdaBoost and robust one-bit compressed sensing
Proof methods for robust low-rank matrix recovery
Suboptimality of constrained least squares and improvements via non-linear predictors
Distribution-free robust linear regression
Fast Convex Pruning of Deep Neural Networks




Cites Work




This page was built for publication: Learning without Concentration