Local Rademacher complexities
DOI: 10.1214/009053605000000282
zbMath: 1083.62034
arXiv: math/0508275
OpenAlex: W3100743579
Wikidata: Q105584239 (Scholia: Q105584239)
MaRDI QID: Q2583411
Authors: Peter L. Bartlett, Olivier Bousquet, Shahar Mendelson
Publication date: 16 January 2006
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0508275
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Computational learning theory (68Q32)
- Analysis of algorithms and problem complexity (68Q25)
- Complexity and performance of numerical algorithms (65Y20)
Related Items (showing first 100 items)
Low-Rank Covariance Function Estimation for Multidimensional Functional Data ⋮ Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations ⋮ Deep learning: a statistical viewpoint ⋮ A Statistical Learning Approach to Modal Regression ⋮ Noisy discriminant analysis with boundary assumptions ⋮ Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds ⋮ Smooth Contextual Bandits: Bridging the Parametric and Nondifferentiable Regret Regimes ⋮ Graphical Convergence of Subgradients in Nonconvex Optimization and Learning ⋮ Full error analysis for the training of deep neural networks ⋮ Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints ⋮ Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso ⋮ Distributed learning for sketched kernel regression ⋮ Multi-kernel learning for multi-label classification with local Rademacher complexity ⋮ Regularized learning schemes in feature Banach spaces ⋮ PAC-learning with approximate predictors ⋮ Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation ⋮ Minimax rates for conditional density estimation via empirical entropy ⋮ Orthogonal statistical learning ⋮ Robust supervised learning with coordinate gradient descent ⋮ Data-adaptive discriminative feature localization with statistically guaranteed interpretation ⋮ Benign Overfitting and Noisy Features ⋮ Asset pricing with neural networks: significance tests ⋮ Metamodel construction for sensitivity analysis ⋮ Concentration Inequalities for Samples without Replacement ⋮ Learning with Convex Loss and Indefinite Kernels ⋮ Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces ⋮ Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery ⋮ U-Processes and Preference Learning ⋮ A moment-matching approach to testable learning and a new characterization of Rademacher complexity ⋮ Variance-based regularization with convex objectives ⋮ Comments on: Support vector machines maximizing geometric margins for multi-class classification ⋮ Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach ⋮ Comment ⋮ FAST RATES FOR ESTIMATION ERROR AND ORACLE INEQUALITIES FOR MODEL SELECTION ⋮ Learning rates for partially linear support vector machine in high dimensions ⋮ Estimating Individualized Treatment Rules Using Outcome Weighted Learning ⋮ Learning models with uniform performance via distributionally robust optimization ⋮ Generalization bounds for non-stationary mixing processes ⋮ Online regularized learning with pairwise loss functions ⋮ Fast rates by transferring from auxiliary hypotheses ⋮ On the optimal estimation of probability measures in weak and strong topologies ⋮ Consistency analysis of an empirical minimum error entropy algorithm ⋮ Tikhonov, Ivanov and Morozov regularization for support vector machine learning ⋮ A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model ⋮ Influence diagnostics in support vector machines ⋮ Sparsity in penalized empirical risk minimization ⋮ Empirical variance minimization with applications in variance reduction and optimal control ⋮ On nonparametric classification with missing covariates ⋮ Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder) ⋮ Smooth sparse coding via marginal regression for learning sparse representations ⋮ Regularization in kernel learning ⋮ Complexity of pattern classes and the Lipschitz property ⋮ Statistical properties of kernel principal component analysis ⋮ Model selection by bootstrap penalization for classification ⋮ Optimal dyadic decision trees ⋮ Learning without concentration for general loss functions ⋮ Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies ⋮ Localization of VC classes: beyond local Rademacher complexities ⋮ Fast rates of minimum error entropy with heavy-tailed noise ⋮ Estimation of partially conditional average treatment effect by double kernel-covariate balancing ⋮ Multi-kernel regularized classifiers ⋮ Inverse statistical learning ⋮ Local Rademacher complexity: sharper risk bounds with and without unlabeled samples ⋮ Robust statistical learning with Lipschitz and convex loss functions ⋮ Compressive statistical learning with random feature moments ⋮ A unified penalized method for sparse additive quantile models: an RKHS approach ⋮ Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics ⋮ Handling concept drift via model reuse ⋮ Convergence of online pairwise regression learning with quadratic loss ⋮ Model selection in reinforcement learning ⋮ Statistical performance of support vector machines ⋮ Nonasymptotic upper bounds for the reconstruction error of PCA ⋮ On mean estimation for heteroscedastic random variables ⋮ Robustness and generalization ⋮ From Gauss to Kolmogorov: localized measures of complexity for ellipses ⋮ Nonparametric distributed learning under general designs ⋮ Bootstrap model selection for possibly dependent and heterogeneous data ⋮ Concentration estimates for learning with unbounded sampling ⋮ Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence ⋮ Nonasymptotic analysis of robust regression with modified Huber's loss ⋮ Estimating conditional quantiles with the help of the pinball loss ⋮ On the empirical estimation of integral probability metrics ⋮ Optimal model selection in heteroscedastic regression using piecewise polynomial functions ⋮ Model selection by resampling penalization ⋮ Penalized empirical risk minimization over Besov spaces ⋮ ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels ⋮ Margin-adaptive model selection in statistical learning ⋮ Fast learning from \(\alpha\)-mixing observations ⋮ Transfer bounds for linear feature learning
Cites Work
- A new approach to least-squares estimation, with applications
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Sharper bounds for Gaussian and empirical processes
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Advanced lectures on machine learning. Machine learning summer school 2002, Canberra, Australia, February 11–22, 2002. Revised lectures
- Concentration inequalities using the entropy method
- Smooth discrimination analysis
- A Bennett concentration inequality and its application to suprema of empirical processes
- A distribution-free theory of nonparametric regression
- Une inégalité de Bennett pour les maxima de processus empiriques. (A Bennett-type inequality for maxima of empirical processes)
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Complexity regularization via localized random penalties
- Weak convergence and empirical processes. With applications to statistics
- Empirical minimization
- Uniform Central Limit Theorems
- Asymptotic Statistics
- A sharp concentration inequality with applications
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- Improving the sample complexity using global data
- The importance of convexity in learning with squared loss
- DOI: 10.1162/153244303321897690
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convexity, Classification, and Risk Bounds
- Convergence of stochastic processes
- Some applications of concentration inequalities to statistics
- Model selection and error estimation