Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
DOI: 10.1007/978-3-642-22147-7 · zbMATH Open: 1223.91002 · OpenAlex: W648260396 · MaRDI QID: Q549116
Authors: Vladimir Koltchinskii
Publication date: 7 July 2011
Published in: Lecture Notes in Mathematics
Full work available at URL: https://doi.org/10.1007/978-3-642-22147-7
Recommendations
- Discussion of ``On concentration for (regularized) empirical risk minimization'' by Sara van de Geer and Martin Wainwright
- Oracle inequalities for local and global empirical risk minimizers
- Sharp oracle inequalities in low rank estimation
- General nonexact oracle inequalities for classes with a subexponential envelope
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
Classification (MSC):
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Estimation in multivariate analysis (62H12)
- Random matrices (probabilistic aspects) (60B20)
- Signal detection and filtering (aspects of stochastic processes) (60G35)
- Research exposition (monographs, survey articles) pertaining to game theory, economics, and finance (91-02)
- Special problems of linear programming (transportation, multi-index, data envelopment analysis, etc.) (90C08)
Cited In (first 100 items)
- The partial linear model in high dimensions
- Directed Community Detection With Network Embedding
- Bayesian fractional posteriors
- Bounding the expectation of the supremum of empirical processes indexed by Hölder classes
- Low-rank model with covariates for count data with missing values
- Low rank estimation of similarities on graphs
- Outlier detection in networks with missing links
- Dimensionality reduction with subgaussian matrices: a unified theory
- On the prediction loss of the Lasso in the partially labeled setting
- \(L_1\)-penalization in functional linear regression with subgaussian design
- Optimal learning with \textit{Q}-aggregation
- Sparse recovery under weak moment assumptions
- Concentration of the empirical level sets of Tukey's halfspace depth
- On the asymptotic variance of the debiased Lasso
- Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
- Regularization and the small-ball method. I: Sparse recovery
- Von Neumann entropy penalization and low-rank matrix estimation
- Sparse and low-rank multivariate Hawkes processes
- Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
- An exponential inequality for suprema of empirical processes with heavy tails on the left
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Oracle inequalities for high-dimensional prediction
- Slope meets Lasso: improved oracle bounds and optimality
- Noisy low-rank matrix completion with general sampling distribution
- On oracle inequalities related to data-driven hard thresholding
- Robust machine learning by median-of-means: theory and practice
- Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
- Conditional quantile processes based on series or many regressors
- High-dimensional Ising model selection with Bayesian information criteria
- Honest confidence sets in nonparametric IV regression and other ill-posed models
- Partially linear functional quantile regression in a reproducing kernel Hilbert space
- Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings
- An elementary analysis of ridge regression with random design
- A rank-corrected procedure for matrix completion with fixed basis coefficients
- Estimation of low rank density matrices: bounds in Schatten norms and other distances
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- Sparse recovery in convex hulls via entropy penalization
- Surrogate losses in passive and active learning
- Nonparametric estimation of low rank matrix valued function
- Optimal prediction of quantile functional linear regression in reproducing kernel Hilbert spaces
- Approximating polyhedra with sparse inequalities
- Low rank estimation of smooth kernels on graphs
- Estimation from nonlinear observations via convex programming with application to bilinear regression
- U-Processes and Preference Learning
- Tail index estimation, concentration and adaptivity
- Matrix concentration inequalities via the method of exchangeable pairs
- On concentration for (regularized) empirical risk minimization
- Optimal exponential bounds on the accuracy of classification
- Kullback-Leibler aggregation and misspecified generalized linear models
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- Some lower bounds on sparse outer approximations of polytopes
- Discussion of ``On concentration for (regularized) empirical risk minimization'' by Sara van de Geer and Martin Wainwright
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- Learning without concentration
- Sharp oracle inequalities in low rank estimation
- General nonexact oracle inequalities for classes with a subexponential envelope
- Cox process functional learning
- Geometric median and robust estimation in Banach spaces
- Permutational Rademacher Complexity
- Approximate nonparametric quantile regression in reproducing kernel Hilbert spaces via random projection
- On the optimality of the empirical risk minimization procedure for the convex aggregation problem
- Sparse high-dimensional semi-nonparametric quantile regression in a reproducing kernel Hilbert space
- Concentration inequalities for statistical inference
- Learning without concentration for general loss functions
- Phase retrieval: stability and recovery guarantees
- Divide-and-conquer for debiased \(l_1\)-norm support vector machine in ultra-high dimensions
- Sparsity in penalized empirical risk minimization
- Oracle inequalities and optimal inference under group sparsity
- Robust matrix completion
- Matrix completion by singular value thresholding: sharp bounds
- Sparse learning for large-scale and high-dimensional data: a randomized convex-concave optimization approach
- Estimating a network from multiple noisy realizations
- The expected norm of a sum of independent random matrices: an elementary approach
- Concentration inequalities for matrix martingales in continuous time
- Convergence rates of support vector machines regression for functional data
- Robust group synchronization via cycle-edge message passing
- Functional linear regression with Huber loss
- Combinatorial bounds of overfitting for threshold classifiers
- Sharp oracle inequalities for square root regularization
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Diffeomorphic Registration Using Sinkhorn Divergences
- On least squares estimation under heteroscedastic and heavy-tailed errors
- Performance guarantees for policy learning
- Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- Handling concept drift via model reuse
- Sharp oracle inequalities for low-complexity priors
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Regularization and the small-ball method. II: Complexity dependent error rates
- Binary classification with corrupted labels
- Empirical variance minimization with applications in variance reduction and optimal control
- Model selection in utility-maximizing binary prediction
- Neural network training using \(\ell_1\)-regularization and bi-fidelity data