Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness

From MaRDI portal




Abstract: We investigate the learning rate of multiple kernel learning (MKL) with ℓ1 and elastic-net regularizations. The elastic-net regularization is a composition of an ℓ1-regularizer, which induces sparsity, and an ℓ2-regularizer, which controls smoothness. We focus on a sparse setting, where the total number of kernels is large but the number of nonzero components of the ground truth is relatively small, and show convergence rates sharper than any previously established for either ℓ1 or elastic-net regularization. Our analysis reveals relations between the choice of regularization function and the resulting performance: if the ground truth is smooth, the elastic-net regularization achieves a faster convergence rate under weaker conditions than ℓ1-regularization; otherwise, ℓ1-regularization achieves the faster rate.
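To make the trade-off concrete, here is a minimal sketch of the penalty structure the abstract describes: the elastic-net regularizer combines an ℓ1 term over the per-kernel component norms (inducing sparsity across kernels) with a squared ℓ2 term (controlling smoothness). The function name, weights, and inputs below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def elastic_net_penalty(block_norms, lam1, lam2):
    """Illustrative elastic-net penalty over per-kernel component norms.

    Given the norms ||f_m|| of the M kernel components, returns
        lam1 * sum_m ||f_m||  +  lam2 * sum_m ||f_m||^2,
    i.e. an l1 term (sparsity across kernels) plus a squared l2 term
    (smoothness). Pure l1-regularized MKL is the special case lam2 = 0.
    """
    block_norms = np.asarray(block_norms, dtype=float)
    return lam1 * block_norms.sum() + lam2 * np.square(block_norms).sum()

# Example: three kernel components, one of them inactive (norm 0).
print(elastic_net_penalty([1.0, 0.0, 2.0], lam1=0.5, lam2=0.25))  # → 2.75
```

Setting `lam2 = 0` recovers the plain ℓ1-regularized objective, which is why the two regularizers can be compared within one framework, as the abstract does.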




Cited in (28)



This page was built for publication: Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness

MaRDI item: Q366980