Kernels for vector-valued functions: a review


DOI: 10.1561/2200000036
zbMATH Open: 1301.68212
arXiv: 1106.6251
OpenAlex: W4206212643
Wikidata: Q57831467
Scholia: Q57831467
MaRDI QID: Q2903301
FDO: Q2903301


Authors: Mauricio A. Álvarez, Lorenzo Rosasco, Neil D. Lawrence


Publication date: 8 August 2012

Published in: Foundations and Trends in Machine Learning

Abstract: Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective, they play a central role in regularization theory, as they provide a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective, they are key to Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs, and a considerable amount of work has been devoted to designing and learning such kernels. More recently, there has been increasing interest in methods that deal with multiple outputs, motivated partly by frameworks like multitask learning. In this paper, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
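As an illustration of one construction the review covers, the following is a minimal NumPy sketch (not from the paper itself) of a separable multi-output kernel of intrinsic-coregionalization type, where the joint covariance over D outputs is the Kronecker product of a D x D positive semidefinite coregionalization matrix B and a scalar base kernel k(x, x'); all function names, parameters, and values below are illustrative assumptions.

    import numpy as np

    def rbf(X1, X2, lengthscale=1.0):
        # Squared-exponential base kernel k(x, x') between rows of X1 and X2.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    def icm_kernel(X1, X2, B, lengthscale=1.0):
        # Separable (intrinsic coregionalization) multi-output kernel:
        # the (i, j) output block equals B[i, j] * k(X1, X2).
        return np.kron(B, rbf(X1, X2, lengthscale))

    # Hypothetical toy setup: D = 2 outputs sharing n = 5 inputs.
    X = np.random.default_rng(0).normal(size=(5, 1))
    W = np.array([[1.0], [0.5]])
    B = W @ W.T + 0.1 * np.eye(2)        # rank-1 plus diagonal, hence PSD
    K = icm_kernel(X, X, B)              # (2*5) x (2*5) joint covariance

Because B is positive semidefinite and the base kernel is valid, the Kronecker product is itself a valid covariance, which is what makes this simple separable construction a common starting point among the designs the paper surveys.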


Full work available at URL: https://arxiv.org/abs/1106.6251








Cited in: 81 documents





