Learning with Convex Loss and Indefinite Kernels
From MaRDI portal
Publication: 5378314
DOI: 10.1162/NECO_a_00535
zbMath: 1410.68326
OpenAlex: W2044874115
Wikidata: Q46897398
Scholia: Q46897398
MaRDI QID: Q5378314
Di-Rong Chen, Fenghong Yang, Hongzhi Tong
Publication date: 12 June 2019
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00535
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Least square regression with indefinite kernels and coefficient regularization
- Learning with varying insensitive loss
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- Analysis of support vector machines regression
- A Bennett concentration inequality and its application to suprema of empirical processes
- Weak convergence and empirical processes. With applications to statistics
- An approximation theory approach to learning with \(\ell^1\) regularization
- Concentration estimates for learning with unbounded sampling
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Consistency and robustness of kernel-based regression in convex risk minimization
- Approximation with polynomial kernels and SVM classifiers
- Learning rates of least-square regularized regression
- Multiscale kernels
- Local Rademacher complexities
- Learning theory estimates via integral operators and their approximations
- Local polynomial reproduction and moving least squares approximation
- On the mathematical foundations of learning
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- 10.1162/1532443041424319
- Theory of Reproducing Kernels
- Robust Statistics