Error bounds for \(l^p\)-norm multiple kernel learning with least square loss (Q448851)

From MaRDI portal
Property / Wikidata QID: Q58697082
Property / describes a project that uses: ElemStatLearn
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1155/2012/915920
Property / OpenAlex ID: W1985786749
Property / cites work: Q5396629
Property / cites work: Q3093181
Property / cites work: Q3093293
Property / cites work: Theory of Reproducing Kernels
Property / cites work: 10.1162/153244302760200704
Property / cites work: On the mathematical foundations of learning
Property / cites work: Consistency analysis of spectral regularization algorithms
Property / cites work: Q3174075
Property / cites work: Learning rates of least-square regularized regression
Property / cites work: The elements of statistical learning. Data mining, inference, and prediction
Property / cites work: Multi-kernel regularized classifiers
Property / cites work: Model Selection and Estimation in Regression with Grouped Variables
Property / cites work: Weak convergence and empirical processes. With applications to statistics
Property / cites work: Learning theory estimates via integral operators and their approximations
Property / cites work: Shannon sampling. II: Connections to learning theory
Property / cites work: Error bounds for learning the kernel
Property / cites work: Capacity of reproducing kernel spaces in learning theory
Property / cites work: A note on application of integral operator in learning theory
Property / cites work: Graph-Based Semi-Supervised Learning and Spectral Kernel Design
Property / cites work: Statistical Learning Theory: Models, Concepts, and Results
Property / cites work: Q2880875
Property / cites work: Sparsity in penalized empirical risk minimization

Latest revision as of 16:31, 5 July 2024

scientific article

Language: English
Label: Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
Description: scientific article

    Statements

    Error bounds for \(l^p\)-norm multiple kernel learning with least square loss (English)
    7 September 2012
    Summary: The problem of learning the kernel function as a linear combination of multiple kernels has attracted considerable attention recently in machine learning. Specifically, by imposing an \(l^p\)-norm penalty on the kernel combination coefficients, multiple kernel learning (MKL) was proved useful and effective for both theoretical analysis and practical applications. In this paper, we present a theoretical analysis of the approximation error and learning ability of \(l^p\)-norm MKL. Our analysis gives explicit learning rates for \(l^p\)-norm MKL and demonstrates some notable advantages over traditional kernel-based learning algorithms in which the kernel is fixed.
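    A minimal sketch of the kind of objective the summary describes, with notation assumed for illustration rather than taken from the paper: given candidate kernels \(K_1, \dots, K_m\) and training data \((x_i, y_i)_{i=1}^n\), \(l^p\)-norm MKL with least square loss selects both a combined kernel and a predictor by solving

```latex
\min_{\beta \ge 0,\; f \in \mathcal{H}_{K_\beta}}\;
  \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^2
  + \lambda\,\|f\|_{K_\beta}^2
\quad\text{s.t.}\quad \|\beta\|_p \le 1,
\qquad
K_\beta = \sum_{j=1}^{m} \beta_j K_j,
```

    where \(\|\beta\|_p = \bigl(\sum_{j=1}^m \beta_j^p\bigr)^{1/p}\) is the \(l^p\)-norm penalty on the combination coefficients, \(\mathcal{H}_{K_\beta}\) is the reproducing kernel Hilbert space of \(K_\beta\), and \(\lambda > 0\) is a regularization parameter. Taking \(p = 1\) promotes sparse kernel combinations, while larger \(p\) spreads weight across kernels; the fixed-kernel case corresponds to \(m = 1\).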