On the convergence rate of l_p-norm multiple kernel learning
From MaRDI portal
Publication:5405196
Recommendations
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Local Rademacher complexity-based learning guarantees for multi-task learning
- Sparsity in multiple kernel learning
Cited in (14):
- scientific article; zbMATH DE number 6253899
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- scientific article; zbMATH DE number 7295804
- An efficient kernel learning algorithm for semisupervised regression problems
- Local Rademacher complexity-based learning guarantees for multi-task learning
- Online primal-dual learning for a data-dependent multi-kernel combination model with multiclass visual categorization applications
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- Multi-task and lifelong learning of kernels
- Training \(L_p\)-norm multiple kernel learning in the primal
- Fast generalization error bound of deep learning without scale invariance of activation functions
- Refined Rademacher chaos complexity bounds with applications to the multikernel learning problem
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Analysis of target data-dependent greedy kernel algorithms: convergence rates for \(f\)-, \(f \cdot P\)- and \(f/P\)-greedy
This page was built for publication: On the convergence rate of \(l_{p}\)-norm multiple kernel learning