When is there a representer theorem? Vector versus matrix regularizers
zbMATH Open: 1235.68128 · MaRDI QID: Q2880982
Authors: Andreas Argyriou, Massimiliano Pontil, Charles A. Micchelli
Publication date: 17 April 2012
Published in: Journal of Machine Learning Research (JMLR)
Full work available at URL: http://www.jmlr.org/papers/v10/argyriou09a.html
Recommendations
- When is there a representer theorem? Nondifferentiable regularisers and Banach spaces
- When is there a representer theorem? Reflexive Banach spaces
- A unifying representer theorem for inverse problems and machine learning
- Regularized learning in Banach spaces as an optimization problem: representer theorems
Classification
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
Cited In (22)
- An algebraic characterization of the optimum of regularized kernel methods
- Functional reproducing kernel Hilbert spaces for non-point-evaluation functional data
- KOC+: kernel ridge regression based one-class classification using privileged information
- A duality approach to regularized learning problems in Banach spaces
- Efficiently learning the preferences of people
- Generalized semi-inner products with applications to regularized learning
- System identification using kernel-based regularization: new insights on stability and consistency issues
- Kernelization of matrix updates, when and how?
- Finite rank kernels for multi-task learning
- When is there a representer theorem? Reflexive Banach spaces
- Regularized learning in Banach spaces as an optimization problem: representer theorems
- When is there a representer theorem? Nondifferentiable regularisers and Banach spaces
- Nyström-based approximate kernel subspace learning
- Generalized Mercer Kernels and Reproducing Kernel Banach Spaces
- Learning with tensors: a framework based on convex optimization and spectral regularization
- Decentralized learning over a network with Nyström approximation using SGD
- Learning with infinitely many features
- A unifying representer theorem for inverse problems and machine learning
- Learning rates of multitask kernel methods