Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness

Publication: 366980

DOI: 10.1214/13-AOS1095
zbMATH Open: 1273.62090
arXiv: 1203.0565
MaRDI QID: Q366980
FDO: Q366980


Authors: Taiji Suzuki, Masashi Sugiyama


Publication date: 25 September 2013

Published in: The Annals of Statistics

Abstract: We investigate the learning rate of multiple kernel learning (MKL) with $\ell_1$ and elastic-net regularizations. The elastic-net regularization is a composition of an $\ell_1$-regularizer for inducing sparsity and an $\ell_2$-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of nonzero components of the ground truth is relatively small, and show sharper convergence rates than previously established for both $\ell_1$ and elastic-net regularizations. Our analysis reveals some relations between the choice of a regularization function and the performance. If the ground truth is smooth, we show a faster convergence rate for the elastic-net regularization under fewer conditions than for $\ell_1$-regularization; otherwise, a faster convergence rate is shown for the $\ell_1$-regularization.
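
For context, a minimal sketch of the two regularizers compared in the abstract, under the standard MKL assumption that the estimator decomposes as $f = \sum_{m=1}^{M} f_m$ with each component $f_m$ in a reproducing kernel Hilbert space $\mathcal{H}_m$ (this notation is assumed here, not taken from this record):

\[
R_{\ell_1}(f) = \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m},
\qquad
R_{\mathrm{en}}(f) = \lambda_1 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m} + \lambda_2 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}^{2}.
\]

Setting $\lambda_2 = 0$ recovers the pure $\ell_1$ (sparsity-inducing) penalty, while $\lambda_2 > 0$ adds the smoothness-controlling $\ell_2$ term discussed in the abstract.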


Full work available at URL: https://arxiv.org/abs/1203.0565






Cited In (27)






