Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
Publication: 5378343
DOI: 10.1162/NECO_a_00566
zbMath: 1412.68187
OpenAlex: W2156230286
Wikidata: Q50692253 (Scholia: Q50692253)
MaRDI QID: Q5378343
Publication date: 12 June 2019
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00566
Cites Work
- Multi-kernel regularized classifiers
- Learning rates of multi-kernel regularized regression
- Fast rates for support vector machines using Gaussian kernels
- The covering number in learning theory
- New approaches to statistical learning theory
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Weak convergence and empirical processes. With applications to statistics
- On the mathematical foundations of learning
- Rademacher Chaos Complexities for Learning the Kernel Problem
- Learning Theory
- Uniform Central Limit Theorems
- DOI: 10.1162/153244302760200713
- DOI: 10.1162/153244303321897690
- Neural Network Learning
- Learning Bounds for Support Vector Machines with Learned Kernels
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- Advanced Lectures on Machine Learning
- Convexity, Classification, and Risk Bounds