Local Rademacher complexities


Publication: 2583411

DOI: 10.1214/009053605000000282
zbMath: 1083.62034
arXiv: math/0508275
OpenAlex: W3100743579
Wikidata: Q105584239 (Scholia: Q105584239)
MaRDI QID: Q2583411

Peter L. Bartlett, Olivier Bousquet, Shahar Mendelson

Publication date: 16 January 2006

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/math/0508275
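
For context, the paper's central object is the local Rademacher complexity of a function class: the usual Rademacher average restricted to the functions with small second moment. A minimal sketch of the standard definition (the notation \(P f^2\), \(\sigma_i\), \(X_i\) follows common usage and is not taken from this record):

\[
  \mathbb{E}\, R_n \{ f \in F : P f^2 \le r \}
  \;=\; \mathbb{E} \sup_{f \in F,\; P f^2 \le r} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(X_i),
\]

where \(\sigma_1, \dots, \sigma_n\) are i.i.d. Rademacher signs, \(X_1, \dots, X_n\) are i.i.d. samples, and \(P f^2 = \mathbb{E} f(X)^2\). Fixed points of a sub-root upper bound on this quantity, as a function of \(r\), drive the fast-rate oracle inequalities for empirical risk minimization developed in the paper.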




Related Items (first 100 items)

Low-Rank Covariance Function Estimation for Multidimensional Functional Data
Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations
Deep learning: a statistical viewpoint
A Statistical Learning Approach to Modal Regression
Noisy discriminant analysis with boundary assumptions
Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds
Smooth Contextual Bandits: Bridging the Parametric and Nondifferentiable Regret Regimes
Graphical Convergence of Subgradients in Nonconvex Optimization and Learning
Full error analysis for the training of deep neural networks
Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Distributed learning for sketched kernel regression
Multi-kernel learning for multi-label classification with local Rademacher complexity
Regularized learning schemes in feature Banach spaces
PAC-learning with approximate predictors
Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
Minimax rates for conditional density estimation via empirical entropy
Orthogonal statistical learning
Robust supervised learning with coordinate gradient descent
Data-adaptive discriminative feature localization with statistically guaranteed interpretation
Benign Overfitting and Noisy Features
Asset pricing with neural networks: significance tests
Metamodel construction for sensitivity analysis
Concentration Inequalities for Samples without Replacement
Learning with Convex Loss and Indefinite Kernels
Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
U-Processes and Preference Learning
A moment-matching approach to testable learning and a new characterization of Rademacher complexity
Variance-based regularization with convex objectives
Comments on: Support vector machines maximizing geometric margins for multi-class classification
Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach
Comment
FAST RATES FOR ESTIMATION ERROR AND ORACLE INEQUALITIES FOR MODEL SELECTION
Learning rates for partially linear support vector machine in high dimensions
Estimating Individualized Treatment Rules Using Outcome Weighted Learning
Learning models with uniform performance via distributionally robust optimization
Generalization bounds for non-stationary mixing processes
Online regularized learning with pairwise loss functions
Fast rates by transferring from auxiliary hypotheses
On the optimal estimation of probability measures in weak and strong topologies
Consistency analysis of an empirical minimum error entropy algorithm
Tikhonov, Ivanov and Morozov regularization for support vector machine learning
A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
Influence diagnostics in support vector machines
Sparsity in penalized empirical risk minimization
Empirical variance minimization with applications in variance reduction and optimal control
On nonparametric classification with missing covariates
Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
Smooth sparse coding via marginal regression for learning sparse representations
Regularization in kernel learning
Complexity of pattern classes and the Lipschitz property
Statistical properties of kernel principal component analysis
Model selection by bootstrap penalization for classification
Optimal dyadic decision trees
Learning without concentration for general loss functions
Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
Localization of VC classes: beyond local Rademacher complexities
Fast rates of minimum error entropy with heavy-tailed noise
Estimation of partially conditional average treatment effect by double kernel-covariate balancing
Multi-kernel regularized classifiers
Inverse statistical learning
Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
Robust statistical learning with Lipschitz and convex loss functions
Compressive statistical learning with random feature moments
A unified penalized method for sparse additive quantile models: an RKHS approach
Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
Handling concept drift via model reuse
Convergence of online pairwise regression learning with quadratic loss
Model selection in reinforcement learning
Statistical performance of support vector machines
Nonasymptotic upper bounds for the reconstruction error of PCA
On mean estimation for heteroscedastic random variables
Robustness and generalization
From Gauss to Kolmogorov: localized measures of complexity for ellipses
Nonparametric distributed learning under general designs
Bootstrap model selection for possibly dependent and heterogeneous data
Concentration estimates for learning with unbounded sampling
Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
Nonasymptotic analysis of robust regression with modified Huber's loss
Estimating conditional quantiles with the help of the pinball loss
On the empirical estimation of integral probability metrics
Optimal model selection in heteroscedastic regression using piecewise polynomial functions
Model selection by resampling penalization
Penalized empirical risk minimization over Besov spaces
ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
Margin-adaptive model selection in statistical learning
Fast learning from \(\alpha\)-mixing observations
Transfer bounds for linear feature learning








This page was built for publication: Local Rademacher complexities