Estimating conditional quantiles with the help of the pinball loss
Publication: 637098
DOI: 10.3150/10-BEJ267
zbMath: 1284.62235
arXiv: 1102.2101
Wikidata: Q59196384
Scholia: Q59196384
MaRDI QID: Q637098
Ingo Steinwart, Andreas Christmann
Publication date: 2 September 2011
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1102.2101
Classification (MSC):
- 62G20 Asymptotic properties of nonparametric inference
- 62H30 Classification and discrimination; cluster analysis (statistical aspects)
- 62G05 Nonparametric estimation
Related Items
- Asymmetric least squares support vector machine classifiers
- Conformal Prediction: A Gentle Introduction
- Asymmetric \(\nu\)-tube support vector regression
- Statistical consistency of coefficient-based conditional quantile regression
- An SVM-like approach for expectile regression
- Approximation on variable exponent spaces by linear integral operators
- Kernel-based sparse regression with the correntropy-induced loss
- A unified penalized method for sparse additive quantile models: an RKHS approach
- Robust support vector quantile regression with truncated pinball loss (RSVQR)
- On Lagrangian L2-norm pinball twin bounded support vector machine via unconstrained convex minimization
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Quantitative convergence analysis of kernel based large-margin unified machines
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Error analysis of classification learning algorithms based on LUMs loss
- Mitigating robust overfitting via self-residual-calibration regularization
- Estimation of quantile oriented sensitivity indices
- Separability of reproducing kernel spaces
- Stochastic online convex optimization. Application to probabilistic time series forecasting
- Optimal regression rates for SVMs using Gaussian kernels
- Conditional quantiles with varying Gaussians
- Online learning for quantile regression and support vector regression
- Fast learning from \(\alpha\)-mixing observations
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Learning Theory Estimates with Observations from General Stationary Stochastic Processes
- A Robust Regression Framework with Laplace Kernel-Induced Loss
- A simpler approach to coefficient regularized support vector machines regression
- Learning with varying insensitive loss
- Analysis of approximation by linear operators on variable \(L_\rho^{p(\cdot)}\) spaces and applications in learning theory
- Convergence rate of SVM for kernel-based robust regression
- A new comparison theorem on conditional quantiles
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- An introduction to copula-based bivariate reliability concepts
- Nonparametric estimation of a maximum of quantiles
- Learning rates for kernel-based expectile regression
- Coefficient-based regularization network with variance loss for error
- Perturbation of convex risk minimization and its application in differential private learning algorithms
- Robust kernel-based distribution regression
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Estimation of scale functions to model heteroscedasticity by regularised kernel-based quantile methods
- A new support vector machine plus with pinball loss
- Measuring the Capacity of Sets of Functions in the Analysis of ERM
- Moving quantile regression
- Analysis of Regression Algorithms with Unbounded Sampling
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Sparse additive machine with ramp loss
Cites Work
- On consistency and robustness properties of support vector machines for heavy-tailed distributions
- Oracle inequalities for support vector machines that are based on random entropy numbers
- Regularization in kernel learning
- Information-theoretic determination of minimax rates of convergence
- Smooth discrimination analysis
- Optimal aggregation of classifiers in statistical learning.
- Optimal rates for the regularized least-squares algorithm
- Statistical performance of support vector machines
- Local Rademacher complexities
- How to compare different loss functions and their risks
- Learning Theory
- Support Vector Machines
- PIECEWISE-POLYNOMIAL APPROXIMATIONS OF FUNCTIONS OF THE CLASSES $ W_{p}^{\alpha}$
- Convexity, Classification, and Risk Bounds
- Some applications of concentration inequalities to statistics
- Measure and integration theory. Transl. from the German by Robert B. Burckel