Concentration inequalities and asymptotic results for ratio type empirical processes

From MaRDI portal (Publication Q2497173)

DOI: 10.1214/009117906000000070
zbMath: 1152.60021
arXiv: math/0606788
OpenAlex: W2059529070
MaRDI QID: Q2497173

Evarist Giné, Vladimir I. Koltchinskii

Publication date: 3 August 2006

Published in: The Annals of Probability

Full work available at URL: https://arxiv.org/abs/math/0606788



Related Items

- On least squares estimation under heteroscedastic and heavy-tailed errors
- Upper functions for \(\mathbb{L}_{p}\)-norms of Gaussian random fields
- Berry-Esseen bounds for Chernoff-type nonstandard asymptotics in isotonic regression
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Minimax confidence intervals for the sliced Wasserstein distance
- Bounding the expectation of the supremum of an empirical process over a (weak) VC-major class
- Global testing against sparse alternatives in time-frequency analysis
- On local \(U\)-statistic processes and the estimation of densities of functions of several sample variables
- Upper functions for positive random functionals. I: General setting and Gaussian random functions
- Complex sampling designs: uniform limit theorems and applications
- Localization of VC classes: beyond local Rademacher complexities
- Uniform concentration bounds for frequencies of rare events
- Adaptive confidence sets in \(L^2\)
- On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
- Adaptive estimation of a distribution function and its density in sup-norm loss by wavelet and spline projections
- Tail index estimation, concentration and adaptivity
- Rates of convergence in active learning
- Mean estimation in high dimension
- Optimal upper and lower bounds for the true and empirical excess risks in heteroscedastic least-squares regression
- A local maximal inequality under uniform entropy
- Efficient simulation-based minimum distance estimation and indirect inference
- The generalization performance of ERM algorithm with strongly mixing observations
- Nonparametric regression using deep neural networks with ReLU activation function
- Uniform bounds for norms of sums of independent random functions
- Uniform central limit theorems for kernel density estimators
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- Extending the scope of the small-ball method
- Uniform central limit theorems for the Grenander estimator
- Some new asymptotic theory for least squares series: pointwise and uniform results
- Cox process functional learning
- A new method for estimation and model selection: \(\rho\)-estimation
- Rho-estimators revisited: general theory and applications
- Estimation from nonlinear observations via convex programming with application to bilinear regression
- Surrogate losses in passive and active learning
- On multivariate quantiles under partial orders
- A high-dimensional Wilks phenomenon
- Semi-supervised learning based on high density region estimation
- The optimal PAC bound for intersection-closed concept classes
- Approximating \(L_p\) unit balls via random sampling
- Estimating a density, a hazard rate, and a transition intensity via the \(\rho\)-estimation method
- Set structured global empirical risk minimizers are rate optimal in general dimensions
- An exponential inequality for the distribution function of the kernel density estimator, with applications to adaptive estimation
- When are epsilon-nets small?
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Multiplier \(U\)-processes: sharp bounds and applications
- Two-level monotonic multistage recommender systems
- Sample selection models with monotone control functions
- Generalization performance of graph-based semi-supervised classification
- The Partial Linear Model in High Dimensions


