A Bayesian/information theoretic model of learning to learn via multiple task sampling
Publication: Q1369061
DOI: 10.1023/A:1007327622663
zbMATH Open: 0881.68091
arXiv: 1911.06129
MaRDI QID: Q1369061
Publication date: 1997
Published in: Machine Learning
Abstract: In this paper the problem of learning appropriate bias for an environment of related tasks is examined from a Bayesian perspective. The environment of related tasks is shown to be naturally modelled by the concept of an *objective* prior distribution. Sampling from the objective prior corresponds to sampling different learning tasks from the environment. It is argued that for many common machine learning problems, although we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by sampling from the objective prior. Bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, and the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous.
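As a rough illustration of the idea (a minimal sketch, not the paper's construction), the toy below models an environment whose objective prior is a Gaussian with unknown mean `mu_true`. The learner only knows a candidate family of priors (a grid of possible means, a hypothetical choice here) and updates a posterior over that family from data sampled across several tasks. All variable names and the Gaussian setup are assumptions made for illustration.

```python
import numpy as np

# Toy hierarchy (illustrative assumption, not the paper's setup):
# each task parameter theta is drawn from the unknown objective prior
# N(mu_true, 1); each task then yields n_samples observations x_i ~ N(theta, 1).
# The learner holds a set of candidate priors {N(mu, 1) : mu in a grid}
# and infers mu by Bayesian inference over the tasks it samples.

rng = np.random.default_rng(0)

mu_true = 2.0                               # hyperparameter of the objective prior (unknown to learner)
candidate_mus = np.linspace(-5, 5, 101)     # learner's set of possible priors
log_post = np.zeros_like(candidate_mus)     # uniform hyper-prior over candidates

n_tasks, n_samples = 20, 5
for _ in range(n_tasks):
    theta = rng.normal(mu_true, 1.0)                # sample a task from the objective prior
    x = rng.normal(theta, 1.0, size=n_samples)      # sample data for this task
    # Marginal likelihood of the task under each candidate prior: with
    # theta ~ N(mu, 1) and x_i ~ N(theta, 1), the sample mean satisfies
    # xbar ~ N(mu, 1 + 1/n), and xbar is sufficient for mu.
    xbar = x.mean()
    var = 1.0 + 1.0 / n_samples
    log_post += -0.5 * (xbar - candidate_mus) ** 2 / var
    log_post -= log_post.max()                      # shift for numerical stability

post = np.exp(log_post)
post /= post.sum()
print("posterior mean of mu:", float(np.sum(candidate_mus * post)))
```

As the number of sampled tasks grows, the posterior over the candidate priors concentrates around `mu_true`, matching the abstract's point that a learner with little knowledge of the true prior can still recover it from multiple tasks when the prior's dimensionality (here a single hyperparameter) is small.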
Full work available at URL: https://arxiv.org/abs/1911.06129
Cited In (14)
- Approximate algorithms for neural-Bayesian approaches
- Local convergence rates of the nonparametric least squares estimator with applications to transfer learning
- A deep multitask learning approach for air quality prediction
- A no-free-lunch theorem for multitask learning
- Title not available
- Tackling ordinal regression problem for heterogeneous data: sparse and deep multi-task learning approaches
- Joint detection of malicious domains and infected clients
- Robust learning aided by context
- Multi-target support vector regression via correlation regressor chains
- Lifelong learning in costly feature spaces
- A Riemannian gossip approach to subspace learning on Grassmann manifold
- Inductive transfer for learning Bayesian networks
- Bounds on the minimax rate for estimating a prior over a VC class from independent learning tasks
- A theory of transfer learning with applications to active learning