Learning rates for the kernel regularized regression with a differentiable strongly convex loss
DOI: 10.3934/cpaa.2020176 | zbMath: 1445.68193 | OpenAlex: W3027856219 | MaRDI QID: Q2191832
Huanxiang Liu, Hui-min Wang, Bao Huai Sheng
Publication date: 26 June 2020
Published in: Communications on Pure and Applied Analysis
Full work available at URL: https://doi.org/10.3934/cpaa.2020176
\(K\)-functional; learning rate; reproducing-kernel Hilbert space; Hutchinson metric; conjugate loss; differentiable strongly convex loss; kernel-regularized regression; maximum mean discrepancy (MMD)
General nonlinear regression (62J02); Convex programming (90C25); Learning and adaptive systems in artificial intelligence (68T05); Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
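The title and keywords refer to the standard kernel-regularized (Tikhonov) regression scheme; the following display is a minimal sketch of that general formulation, with notation assumed here rather than quoted from the paper: given a sample \(z=\{(x_i,y_i)\}_{i=1}^m\), a reproducing-kernel Hilbert space \(\mathcal{H}_K\) with norm \(\|\cdot\|_K\), a differentiable strongly convex loss \(V\), and a regularization parameter \(\lambda>0\), the estimator is
\[
f_{z,\lambda}=\arg\min_{f\in\mathcal{H}_K}\Big\{\frac{1}{m}\sum_{i=1}^{m} V\bigl(y_i,f(x_i)\bigr)+\lambda\|f\|_K^2\Big\},
\]
and a learning rate is a bound on how fast the excess risk of \(f_{z,\lambda}\) decays as the sample size \(m\) grows.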
Related Items (4)
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- On the robustness of regularized pairwise learning methods based on kernels
- ERM scheme for quantile regression
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Editorial: On the interface of statistics and machine learning
- On qualitative robustness of support vector machines
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Estimating conditional quantiles with the help of the pinball loss
- Optimal rates for regularization of statistical inverse learning problems
- Learning rates for kernel-based expectile regression
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- A way of constructing spherical zonal translation network operators with linear bounded operators
- The convergence rate for a \(K\)-functional in learning theory
- Analysis of support vector machines regression
- Robustness of reweighted least squares kernel based regression
- \(\varepsilon\)-entropy of convex sets and functions
- Error analysis on Hermite learning with gradient data
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Learning rates for least square regressions with coefficient regularization
- ERM learning with unbounded sampling
- Optimal regression rates for SVMs using Gaussian kernels
- Conditional quantiles with varying Gaussians
- Regularization networks and support vector machines
- Learning sets with separating kernels
- Universality of deep convolutional neural networks
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- Application of integral operator for regularized least-square regression
- The convergence rate of semi-supervised regression with quadratic loss
- On approximation by reproducing kernel spaces in weighted \(L^p\) spaces
- A review on consistency and robustness properties of support vector machines for heavy-tailed distributions
- Consistency and robustness of kernel-based regression in convex risk minimization
- Approximation with polynomial kernels and SVM classifiers
- Learning rates of least-square regularized regression
- On approximation by spherical zonal translation networks based on Bochner-Riesz means
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Application of integral operator for vector-valued regression learning
- Learning rates for the risk of kernel-based quantile regression estimators in additive models
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Convergence analysis of coefficient-based regularization under moment incremental condition
- Hilbert space embeddings and metrics on probability measures
- Convex Optimization in Normed Spaces
- Covering Numbers for Convex Functions
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- Support Vector Machines
- Capacity of reproducing kernel spaces in learning theory
- Consistency of kernel-based quantile regression
- Estimating the approximation error in learning theory
- Deep distributed convolutional neural networks: Universality
- Gradient descent for robust kernel-based regression
- Kernel Mean Embedding of Distributions: A Review and Beyond
- Convergence rate of SVM for kernel-based robust regression
- DOI: 10.1162/1532443041827925
- Fast learning rates for regularized regression algorithms
- On the \(K\)-functional in learning theory
- A representer theorem for deep kernel learning
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels
- Convex analysis and monotone operator theory in Hilbert spaces
- The elements of statistical learning. Data mining, inference, and prediction