Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
From MaRDI portal
Abstract: We investigate the learning rate of multiple kernel learning (MKL) with \(\ell_1\) and elastic-net regularizations. The elastic-net regularization is a composition of an \(\ell_1\)-regularizer for inducing sparsity and an \(\ell_2\)-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of nonzero components of the ground truth is relatively small, and show sharper convergence rates than the learning rates previously shown for both \(\ell_1\) and elastic-net regularizations. Our analysis reveals some relations between the choice of regularization function and the performance. If the ground truth is smooth, we show a faster convergence rate for the elastic-net regularization under fewer conditions than for the \(\ell_1\)-regularization; otherwise, a faster convergence rate is shown for the \(\ell_1\)-regularization.
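For context, the elastic-net regularizer in MKL is typically a weighted combination of an \(\ell_1\)-type term (a sum of RKHS norms, inducing sparsity across kernels) and an \(\ell_2\)-type term (a sum of squared RKHS norms, controlling smoothness). The following is a minimal sketch under assumed generic notation (the parameters \(\lambda_1, \lambda_2\) and spaces \(\mathcal{H}_m\) are illustrative and may differ from the paper's exact formulation):

```latex
% Sketch (assumed notation): MKL estimator over f = \sum_{m=1}^{M} f_m,
% each f_m lying in an RKHS \mathcal{H}_m, with hypothetical
% regularization parameters \lambda_1, \lambda_2 \ge 0.
\hat{f} = \operatorname*{arg\,min}_{f_m \in \mathcal{H}_m}\;
  \frac{1}{n} \sum_{i=1}^{n} \Bigl( y_i - \sum_{m=1}^{M} f_m(x_i) \Bigr)^{2}
  + \lambda_1 \sum_{m=1}^{M} \lVert f_m \rVert_{\mathcal{H}_m}
  + \lambda_2 \sum_{m=1}^{M} \lVert f_m \rVert_{\mathcal{H}_m}^{2}.
% Setting \lambda_2 = 0 recovers \ell_1-type (sparse) MKL;
% setting \lambda_1 = 0 gives \ell_2-type (smooth, non-sparse) MKL.
```

The trade-off studied in the paper is between the sparsity induced by the first penalty term and the smoothness enforced by the second.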
Recommendations
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Learning rates of multitask kernel methods
- Sparsity in multiple kernel learning
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Learning rates for multi-kernel linear programming classifiers
- Learning rates of multi-kernel regularized regression
- On the convergence rate of \(l_{p}\)-norm multiple kernel learning
- Optimal learning rates for kernel partial least squares
- On multiple kernel learning methods
- An efficient multiple kernel learning in reproducing kernel Hilbert spaces (RKHS)
Cites work
- scientific article; zbMATH DE number 5957287 (title unavailable)
- scientific article; zbMATH DE number 192914 (title unavailable)
- scientific article; zbMATH DE number 1950576 (title unavailable)
- scientific article; zbMATH DE number 6253899 (title unavailable)
- Algorithms for learning kernels based on centered alignment
- An introduction to support vector machines and other kernel-based learning methods.
- Choosing multiple parameters for support vector machines
- Consistency of the group Lasso and multiple kernel learning
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Eigenvalues of integral operators defined by smooth positive definite kernels
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Function Classes That Approximate the Bayes Risk
- High-dimensional additive modeling
- Learning Bounds for Support Vector Machines with Learned Kernels
- Learning the kernel function via regularization
- Learning the kernel matrix with semidefinite programming
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- On the convergence rate of \(l_{p}\)-norm multiple kernel learning
- Optimal rates for the regularized least-squares algorithm
- Regularization and Variable Selection Via the Elastic Net
- Simultaneous analysis of Lasso and Dantzig selector
- Some results on Tchebycheffian spline functions and stochastic processes
- Sparsity in multiple kernel learning
- Support Vector Machines
- The Group Lasso for Logistic Regression
- Weak convergence and empirical processes. With applications to statistics
Cited in (28)
- Statistical inference in sparse high-dimensional additive models
- Locally adaptive sparse additive quantile regression model with TV penalty
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Minimax optimal estimation in partially linear additive models under high dimension
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- Extreme eigenvalues of nonlinear correlation matrices with applications to additive models
- PAC-Bayesian estimation and prediction in sparse additive models
- Doubly penalized estimation in additive regression with high-dimensional data
- Learning theory of multiple kernel learning [Multiple Kernel Learningの学習理論]
- scientific article; zbMATH DE number 7415083 (title unavailable)
- Asymptotically faster estimation of high-dimensional additive models using subspace learning
- High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso
- A semiparametric model for matrix regression
- scientific article; zbMATH DE number 7625155 (title unavailable)
- Regularized learning schemes in feature Banach spaces
- Distributed learning for sketched kernel regression
- Improved Estimation of High-dimensional Additive Models Using Subspace Learning
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Approximate nonparametric quantile regression in reproducing kernel Hilbert spaces via random projection
- Sparse high-dimensional semi-nonparametric quantile regression in a reproducing kernel Hilbert space
- Sparse additive support vector machines in bounded variation space
- Decentralized learning over a network with Nyström approximation using SGD
- Sparse multiple kernel learning: minimax rates with random projection
- Additive model selection
This page was built for publication: Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness