A theory of learning from different domains

From MaRDI portal


DOI: 10.1007/s10994-009-5152-4
zbMath: 1470.68081
Wikidata: Q63199026
Scholia: Q63199026
MaRDI QID: Q1959571

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman Vaughan

Publication date: 7 October 2010

Published in: Machine Learning

Full work available at URL: https://doi.org/10.1007/s10994-009-5152-4


62H30: Classification and discrimination; cluster analysis (statistical aspects)

68T05: Learning and adaptive systems in artificial intelligence


Related Items

The Out-of-Source Error in Multi-Source Cross Validation-Type Procedures
A theoretical framework for deep transfer learning
Semantic Image Segmentation: Two Decades of Research
Mismatched Training and Test Distributions Can Outperform Matched Ones
Intentional Control of Type I Error Over Unconscious Data Distortion: A Neyman–Pearson Approach to Text Classification
Reducing bias to source samples for unsupervised domain adaptation
Estimating the Area under the ROC Curve When Transporting a Prediction Model to a Target Population
Domain adversarial neural networks for domain generalization: when it works and how to improve
MapFlow: latent transition via normalizing flow for unsupervised domain adaptation
NaCL: noise-robust cross-domain contrastive learning for unsupervised domain adaptation
Interpretable domain adaptation via optimization over the Stiefel manifold
A kernel learning framework for domain adaptation learning
Learning models with uniform performance via distributionally robust optimization
Learning kernels for unsupervised domain adaptation with applications to visual object recognition
Model-driven domain adaptation on product manifolds for unconstrained face recognition
Relative deviation learning bounds and generalization with unbounded loss functions
Multi-domain learning by confidence-weighted parameter combination
A theory of learning from different domains
On generalization in moment-based domain adaptation
Learning smooth representations with generalized softmax for unsupervised domain adaptation
Unsupervised domain adaptation in the wild via disentangling representation learning
Unsupervised domain adaptation with non-stochastic missing data
Marginal singularity and the benefits of labels in covariate-shift
Adaptive transfer learning
Statistical learning from biased training samples
A no-free-lunch theorem for multitask learning
PAC-Bayesian lifelong learning for multi-armed bandits
Improved linear regression prediction by transfer learning
Communication-efficient distributed multi-task learning with matrix sparsity regularization
KS(conf): a light-weight test if a multiclass classifier operates outside of its specifications
Robust unsupervised domain adaptation for neural networks via moment alignment
Adversarial domain adaptation network for tumor image diagnosis
Robust domain adaptation
Nuclear discrepancy for single-shot batch active learning
On the analysis of adaptability in multi-source domain adaptation
Fast rates by transferring from auxiliary hypotheses
On the benefits of representation regularization in invariance based domain generalization



Cites Work