Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.

From MaRDI portal
Publication: 549116

DOI: 10.1007/978-3-642-22147-7 · zbMath: 1223.91002 · OpenAlex: W648260396 · MaRDI QID: Q549116

Vladimir I. Koltchinskii

Publication date: 7 July 2011

Published in: Lecture Notes in Mathematics

Full work available at URL: https://doi.org/10.1007/978-3-642-22147-7



Related Items

On least squares estimation under heteroscedastic and heavy-tailed errors
On the prediction loss of the Lasso in the partially labeled setting
Optimal prediction of quantile functional linear regression in reproducing kernel Hilbert spaces
Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction
Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
Binary classification with corrupted labels
Learning with tree tensor networks: complexity estimates and model selection
Empirical variance minimization with applications in variance reduction and optimal control
Neural network training using \(\ell_1\)-regularization and bi-fidelity data
A rank-corrected procedure for matrix completion with fixed basis coefficients
Estimation of low rank density matrices: bounds in Schatten norms and other distances
Learning without concentration for general loss functions
Optimal robust mean and location estimation via convex programs with respect to any pseudo-norms
Low rank estimation of smooth kernels on graphs
Complex sampling designs: uniform limit theorems and applications
Localization of VC classes: beyond local Rademacher complexities
Estimation of partially conditional average treatment effect by double kernel-covariate balancing
Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
Robust statistical learning with Lipschitz and convex loss functions
Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
Handling concept drift via model reuse
Matrix completion by singular value thresholding: sharp bounds
Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
On concentration for (regularized) empirical risk minimization
The Expected Norm of a Sum of Independent Random Matrices: An Elementary Approach
Approximating polyhedra with sparse inequalities
A simple homotopy proximal mapping algorithm for compressive sensing
Robust machine learning by median-of-means: theory and practice
On the optimality of the empirical risk minimization procedure for the convex aggregation problem
Tail index estimation, concentration and adaptivity
Nonasymptotic analysis of robust regression with modified Huber's loss
Robust classification via MOM minimization
Communication-efficient sparse composite quantile regression for distributed data
ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
Kullback-Leibler aggregation and misspecified generalized linear models
Estimating a network from multiple noisy realizations
Von Neumann entropy penalization and low-rank matrix estimation
Noisy low-rank matrix completion with general sampling distribution
Oracle inequalities and optimal inference under group sparsity
Optimal learning with \textit{Q}-aggregation
Matrix concentration inequalities via the method of exchangeable pairs
Phase retrieval: stability and recovery guarantees
An Introduction to Compressed Sensing
Oracle Inequalities for Local and Global Empirical Risk Minimizers
Performance guarantees for policy learning
Bayesian fractional posteriors
Optimal exponential bounds on the accuracy of classification
\(L_1\)-penalization in functional linear regression with subgaussian design
Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
Oracle inequalities for high-dimensional prediction
Concentration of the empirical level sets of Tukey's halfspace depth
Cox process functional learning
Bounding the expectation of the supremum of empirical processes indexed by Hölder classes
Outlier detection in networks with missing links
Concentration inequalities for matrix martingales in continuous time
Slope meets Lasso: improved oracle bounds and optimality
Robust matrix completion
Dimensionality reduction with subgaussian matrices: a unified theory
Regularization and the small-ball method. I: Sparse recovery
Sparse recovery under weak moment assumptions
Learning sets with separating kernels
Estimation from nonlinear observations via convex programming with application to bilinear regression
Surrogate losses in passive and active learning
Low-rank model with covariates for count data with missing values
Model selection in utility-maximizing binary prediction
Some lower bounds on sparse outer approximations of polytopes
Geometric median and robust estimation in Banach spaces
Learning without Concentration
Sharpness estimation of combinatorial generalization ability bounds for threshold decision rules
Minimax estimation of smooth optimal transport maps
On the exponentially weighted aggregate with the Laplace prior
Concentration Inequalities for Statistical Inference
Sharp oracle inequalities for low-complexity priors
Approximate nonparametric quantile regression in reproducing kernel Hilbert spaces via random projection
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
An exponential inequality for suprema of empirical processes with heavy tails on the left
Quantile trace regression via nuclear norm regularization
Permutational Rademacher Complexity
Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
A generalized Catoni's M-estimator under finite \(\alpha\)-th moment assumption with \(\alpha \in (1,2)\)
Two-level monotonic multistage recommender systems
Convergence rates of support vector machines regression for functional data
Sparse high-dimensional semi-nonparametric quantile regression in a reproducing kernel Hilbert space
Low Rank Estimation of Similarities on Graphs
Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
On Lasso refitting strategies
Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings
On the asymptotic variance of the debiased Lasso
Nonparametric estimation of low rank matrix valued function
An elementary analysis of ridge regression with random design
High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
Mean estimation and regression under heavy-tailed distributions: a survey
Conditional quantile processes based on series or many regressors
High-dimensional Ising model selection with Bayesian information criteria
Robust group synchronization via cycle-edge message passing
Functional linear regression with Huber loss
Suboptimality of constrained least squares and improvements via non-linear predictors
On Multiplier Processes Under Weak Moment Assumptions
Low-Rank Covariance Function Estimation for Multidimensional Functional Data
Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations
Partially linear functional quantile regression in a reproducing kernel Hilbert space
Convergence rate of optimal quantization grids and application to empirical measure
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Branch-and-bound solves random binary IPs in poly\((n)\)-time
Unbiasing and robustifying implied volatility calibration in a cryptocurrency market with large bid-ask spreads and missing quotes
Concentration behavior of the penalized least squares estimator
Dealing with expert bias in collective decision-making
Directed Community Detection With Network Embedding
Minimax rates for conditional density estimation via empirical entropy
Diffeomorphic Registration Using Sinkhorn Divergences
Noisy linear inverse problems under convex constraints: exact risk asymptotics in high dimensions
Statistical performance of quantile tensor regression with convex regularization
Honest confidence sets in nonparametric IV regression and other ill-posed models
Concentration Inequalities for Samples without Replacement
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
Communication-efficient estimation of high-dimensional quantile regression
U-Processes and Preference Learning
Combinatorial bounds of overfitting for threshold classifiers
Kernel Meets Sieve: Post-Regularization Confidence Bands for Sparse Additive Model
Factorisable multitask quantile regression
Nonparametric learning approach to estimate conditional quantiles in the dependent functional data case
Sharp Oracle Inequalities for Square Root Regularization
Regularization and the small-ball method II: complexity dependent error rates
The Partial Linear Model in High Dimensions
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets
Learning Finite-Dimensional Coding Schemes with Nonlinear Reconstruction Maps