Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
DOI: 10.1214/009053606000001019
zbMATH Open: 1118.62065
arXiv: 0708.0083
OpenAlex: W3105849782
Wikidata: Q105584237
Scholia: Q105584237
MaRDI QID: Q2373576
FDO: Q2373576
Author: Vladimir Koltchinskii
Publication date: 12 July 2007
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0708.0083
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Pattern recognition, speech recognition (68T10)
- Computational learning theory (68Q32)
- Probability theory on algebraic and topological structures (60B99)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- New concentration inequalities in product spaces
- Risk bounds for model selection via penalization
- Sharper bounds for Gaussian and empirical processes
- Smooth discrimination analysis
- A Bennett concentration inequality and its application to suprema of empirical processes
- A distribution-free theory of nonparametric regression
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Local Rademacher complexities
- Uniform Central Limit Theorems
- Improving the sample complexity using global data
- Some applications of concentration inequalities to statistics
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Convexity, Classification, and Risk Bounds
- Concentration inequalities and asymptotic results for ratio type empirical processes
- An empirical process approach to the uniform consistency of kernel-type function estimators
- On consistency of kernel density estimators for randomly censored data: Rates holding uniformly over adaptive intervals
- Some limit theorems for empirical processes (with discussion)
- Convergence rate of sieve estimates
- Moment inequalities for functions of independent random variables
- Optimal aggregation of classifiers in statistical learning
- Rademacher penalties and structural risk minimization
- Neural Network Learning
- Model selection for regression on a random design
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- A new look at independence
- Statistical performance of support vector machines
- Efficient agnostic learning of neural networks with bounded fan-in
- Empirical minimization
- Square root penalty: Adaption to the margin in classification and in edge estimation
- On the Bayes-risk consistency of regularized boosting methods
- Model selection and error estimation
- Inequalities for uniform deviations of averages from expectations with applications to nonparametric regression
- A sharp concentration inequality with applications
- Complexity regularization via localized random penalties
- Complexities of convex combinations and bounding the generalization error in classification
- DOI: 10.1162/1532443041424319
- Oracle inequalities and nonparametric function estimation
- Left concentration inequalities for empirical processes
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins
Cited In (first 100 items shown)
- A universal procedure for aggregating estimators
- Bayesian fractional posteriors
- Learning Theory
- Rademacher penalties and structural risk minimization
- Parametric or nonparametric? A parametricness index for model selection
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Local learning estimates by integral operators
- Fast learning rates in statistical inference through aggregation
- Sampling and empirical risk minimization
- Rho-estimators revisited: general theory and applications
- Tests and estimation strategies associated to some loss functions
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- A no-free-lunch theorem for multitask learning
- Model selection by resampling penalization
- A high-dimensional Wilks phenomenon
- Empirical minimization
- Oracle inequalities for cross-validation type procedures
- A new method for estimation and model selection: \(\rho\)-estimation
- Singularity, misspecification and the convergence rate of EM
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Square root penalty: Adaption to the margin in classification and in edge estimation
- Adaptive estimation of a distribution function and its density in sup-norm loss by wavelet and spline projections
- Sharper lower bounds on the performance of the empirical risk minimization algorithm
- Global uniform risk bounds for wavelet deconvolution estimators
- Risk bounds for CART classifiers under a margin condition
- Margin-adaptive model selection in statistical learning
- A local Vapnik-Chervonenkis complexity
- Nonasymptotic bounds for vector quantization in Hilbert spaces
- Aggregation for Gaussian regression
- Fast learning rates for plug-in classifiers
- Optimal survey schemes for stochastic gradient descent with applications to M-estimation
- Local Rademacher complexities
- Random design analysis of ridge regression
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings
- An elementary analysis of ridge regression with random design
- Compressive statistical learning with random feature moments
- Concentration inequalities and confidence bands for needlet density estimators on compact homogeneous manifolds
- Complexity regularization via localized random penalties
- Sparse recovery in convex hulls via entropy penalization
- Approximation properties of certain operator-induced norms on Hilbert spaces
- Ranking and empirical minimization of \(U\)-statistics
- Obtaining fast error rates in nonconvex situations
- Nonparametric estimation of low rank matrix valued function
- Concentration inequalities and asymptotic results for ratio type empirical processes
- U-Processes and Preference Learning
- Statistical properties of kernel principal component analysis
- Rates of convergence in active learning
- Optimal exponential bounds on the accuracy of classification
- On the optimality of the aggregate with exponential weights for low temperatures
- Fast rates for empirical vector quantization
- Empirical risk minimization is optimal for the convex aggregation problem
- Empirical risk minimization for heavy-tailed losses
- Adaptive kernel methods using the balancing principle
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- General oracle inequalities for model selection
- Inverse statistical learning
- Minimax fast rates for discriminant analysis with errors in variables
- General nonexact oracle inequalities for classes with a subexponential envelope
- Fast rates for estimation error and oracle inequalities for model selection
- Simultaneous adaptation to the margin and to complexity in classification
- Statistical performance of support vector machines
- The two-sample problem for Poisson processes: adaptive tests with a nonasymptotic wild bootstrap approach
- A statistical view of clustering performance through the theory of \(U\)-processes
- Complexities of convex combinations and bounding the generalization error in classification
- Direct importance estimation for covariate shift adaptation
- Variance-based regularization with convex objectives
- Sparsity in penalized empirical risk minimization
- Regularization in kernel learning
- DOI: 10.1162/153244303321897690
- Honest confidence sets in nonparametric IV regression and other ill-posed models
- Theory of Classification: a Survey of Some Recent Advances
- Model selection by bootstrap penalization for classification
- Convergence rates for shallow neural networks learned by gradient descent
- Nonparametric regression using deep neural networks with ReLU activation function
- Aggregation of estimators and stochastic optimization
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Relative deviation learning bounds and generalization with unbounded loss functions
- On least squares estimation under heteroscedastic and heavy-tailed errors
- Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints
- Performance guarantees for policy learning
- Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
- Optimal linear discriminators for the discrete choice model in growing dimensions
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Concentration inequalities for two-sample rank processes with application to bipartite ranking
- Robust multicategory support vector machines using difference convex algorithm
- Wild bootstrap inference for penalized quantile regression for longitudinal data
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Empirical variance minimization with applications in variance reduction and optimal control
- Locally simultaneous inference
- Statistical inference using regularized M-estimation in the reproducing kernel Hilbert space for handling missing data