Adaptive and robust multi-task learning
From MaRDI portal
Publication:6183769
Abstract: We study the multi-task learning problem that aims to simultaneously analyze multiple datasets collected from different sources and learn one model for each of them. We propose a family of adaptive methods that automatically utilize possible similarities among those tasks while carefully handling their differences. We derive sharp statistical guarantees for the methods and prove their robustness against outlier tasks. Numerical experiments on synthetic and real datasets demonstrate the efficacy of our new methods.
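The abstract's central idea — pooling information across similar tasks while letting genuinely different (outlier) tasks keep their own estimates — can be illustrated with a toy multi-task mean-estimation sketch. This is not the paper's actual procedure; it is a minimal version of a common formulation in this literature, in which each task's estimate is shrunk toward a robust cross-task center via an unsquared ℓ2 penalty, whose proximal (soft-thresholding) step leaves sufficiently distant tasks essentially untouched. The function name, the penalty heuristic `sqrt(d / n_j)`, and the coordinatewise-median center are all assumptions made for illustration.

```python
import numpy as np

def robust_mtl_means(task_data, lam=None):
    """Illustrative adaptive multi-task mean estimation (hypothetical sketch).

    Each task j receives theta_j = prox of the penalty lam_j * ||theta - beta||_2
    applied to its sample mean m_j, i.e. m_j is shrunk toward a robust center
    beta.  The unsquared norm means similar tasks are pooled exactly (shrunk all
    the way to beta) while outlier tasks barely move.
    """
    means = np.array([x.mean(axis=0) for x in task_data])  # per-task sample means
    ns = np.array([len(x) for x in task_data])             # per-task sample sizes
    d = means.shape[1]
    if lam is None:
        # heuristic penalty ~ sqrt(d / n_j): stronger pooling for smaller tasks
        lam = np.sqrt(d / ns)
    beta = np.median(means, axis=0)  # robust center: coordinatewise median of task means
    thetas = np.empty_like(means)
    for j, (m, l) in enumerate(zip(means, lam)):
        r = m - beta
        nrm = np.linalg.norm(r)
        # group-soft-thresholding: shrink toward beta, vanishing for close tasks
        shrink = max(0.0, 1.0 - l / nrm) if nrm > 0 else 0.0
        thetas[j] = beta + shrink * r
    return thetas, beta
```

On five tasks drawn around a shared mean plus one outlier task, the similar tasks collapse onto the common center while the outlier retains (approximately) its own sample mean — the qualitative behavior the abstract describes as "utilizing similarities while carefully handling differences."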
Cites work
- Scientific article, zbMATH DE number 5957307 (title unavailable)
- Scientific article, zbMATH DE number 3868429 (title unavailable)
- Scientific article, zbMATH DE number 409717 (title unavailable)
- Scientific article, zbMATH DE number 1086070 (title unavailable)
- Scientific article, zbMATH DE number 1113195 (title unavailable)
- Scientific article, zbMATH DE number 845714 (title unavailable)
- Scientific article, zbMATH DE number 1448976 (title unavailable)
- A Dirty Model for Multiple Sparse Regression
- A no-free-lunch theorem for multitask learning
- A tail inequality for quadratic forms of subgaussian random vectors
- Computer age statistical inference. Algorithms, evidence, and data science
- Convex multi-task feature learning
- Data enriched linear regression
- Estimation of the mean of a multivariate normal distribution
- Estimation with quadratic loss
- Fused Lasso approach in regression coefficients clustering -- learning parameter heterogeneity in data integration
- Grouping pursuit through a regularization solution surface
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- Homogeneity pursuit
- Ideal spatial adaptation by wavelet shrinkage
- Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data
- Learning Theory and Kernel Machines
- Learning multiple tasks with kernel methods
- Limiting the Risk of Bayes and Empirical Bayes Estimators--Part II: The Empirical Bayes Case
- Multidimensional linear functional estimation in sparse Gaussian models and robust estimation of the mean
- New perspectives on \(k\)-support and cluster norms
- On a Problem of Adaptive Estimation in Gaussian White Noise
- On nonparametric confidence intervals
- Oracle inequalities and optimal inference under group sparsity
- Outlier detection using nonconvex penalized regression
- Parametric robustness: Small biases can be worthwhile
- Robust Estimation of a Location Parameter
- Robust PCA via Outlier Pursuit
- Robust Statistics
- Sparsity and Smoothness Via the Fused Lasso
- Stein's Estimation Rule and Its Competitors--An Empirical Bayes Approach
- Support union recovery in high-dimensional multivariate regression
- The benefit of multitask representation learning
- The landscape of empirical risk for nonconvex losses
- The use of Previous Experience in Reaching Statistical Decisions
- Trace norm regularization: reformulations, algorithms, and multi-task learning
- Two proposals for robust PCA using semidefinite programming
- Wavelet methods in statistics: some recent developments and their applications
Cited in: 3