Convex multi-task feature learning
Publication: 1009294
DOI: 10.1007/s10994-007-5040-8
zbMath: 1470.68073
OpenAlex: W2065180801
Wikidata: Q56031199 (Scholia: Q56031199)
MaRDI QID: Q1009294
Andreas Argyriou, Theodoros Evgeniou, Massimiliano Pontil
Publication date: 31 March 2009
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-007-5040-8
Keywords: regularization; vector-valued functions; kernels; collaborative filtering; multi-task learning; transfer learning; inductive transfer
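A minimal sketch, assuming standard notation (the symbols \(W\), \(w_t\), \(L\), \(\gamma\), \(m\), \(T\) are not taken from this record), of the squared-trace-norm objective commonly associated with convex multi-task feature learning:
\[
\min_{W=[w_1,\dots,w_T]\in\mathbb{R}^{d\times T}} \;\; \sum_{t=1}^{T}\sum_{i=1}^{m} L\bigl(y_{ti},\langle w_t, x_{ti}\rangle\bigr) \;+\; \gamma\,\|W\|_{\mathrm{tr}}^{2}
\]
Here \(\|W\|_{\mathrm{tr}}\) denotes the trace (nuclear) norm, the sum of the singular values of \(W\); penalizing it encourages the \(T\) task-specific weight vectors to share a low-dimensional feature subspace while keeping the overall problem convex.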
Related Items
A novel ramp loss-based multi-task twin support vector machine with multi-parameter safe acceleration, Variable selection for high‐dimensional generalized linear model with block‐missing data, An upper bound on the minimum rank of a symmetric Toeplitz matrix completion problem, Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm, Sparse and Low-Rank Matrix Quantile Estimation With Application to Quadratic Regression, Deep matrix factorization with knowledge transfer for lifelong clustering and semi-supervised clustering, A unified analysis of convex and non-convex \(\ell_p\)-ball projection problems, Smooth over-parameterized solvers for non-smooth structured optimization, Unnamed Item, Joint Bayesian analysis of multiple response-types using the hierarchical generalized transformation model, Calibrated multi-task subspace learning via binary group structure constraint, Event detection in online social network: methodologies, state-of-art, and evolution, Data driven discovery of systems of ordinary differential equations using nonconvex multitask learning, High-dimensional estimation of quadratic variation based on penalized realized variance, Smoothing fast proximal gradient algorithm for the relaxation of matrix rank regularization problem, Adaptive and robust multi-task learning, Mixed-integer quadratic programming reformulations of multi-task learning models, Reexamining low rank matrix factorization for trace norm regularization, Matrix Poincaré, Φ-Sobolev inequalities, and quantum ensembles, Direct Density Derivative Estimation, Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization, Unnamed Item, A proximal alternating direction method for \(\ell_{2,1}\)-norm least squares problem in multi-task feature learning, Deep Large-Scale Multi-task Learning Network for Gene Expression Inference, A joint convex penalty for inverse covariance matrix estimation, WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions, Low-Rank and Sparse Multi-task Learning, A distributed Frank-Wolfe framework for learning low-rank matrices with the trace norm, Minimum \( n\)-rank approximation via iterative hard thresholding, Regularized high dimension low tubal-rank tensor regression, Advanced conjoint analysis using feature selection via support vector machines, On the robustness of the generalized fused Lasso to prior specifications, Unnamed Item, Which option is a better way to improve transfer learning performance?, Sparse Single Index Models for Multivariate Responses, Robust reduced rank regression in a distributed setting, Joint feature selection and classification for positive unlabelled multi-label data using weighted penalized empirical risk minimization, Data shared Lasso: a novel tool to discover uplift, JGPR: a computationally efficient multi-target Gaussian process regression algorithm, The bounds of restricted isometry constants for low rank matrices recovery, Low rank matrix recovery with impulsive noise, Geometry preserving multi-task metric learning, Learning with infinitely many features, Manifold regularization based on Nyström type subsampling, Multi-task learning via linear functional strategy, Group guided fused Laplacian sparse group Lasso for modeling Alzheimer's disease progression, Guarantees of Riemannian optimization for low rank matrix completion, A unified approach to error bounds for structured convex optimization problems, 
Approximation accuracy, gradient methods, and error bound for structured convex optimization, Inter-class sparsity based discriminative least square regression, Group variable selection via \(\ell_{p,0}\) regularization and application to optimal scoring, Joint ranking SVM and binary relevance with robust low-rank learning for multi-label classification, Recovery of low-rank matrices based on the rank null space properties, Learning with optimal interpolation norms, Multi-target regression via input space expansion: treating targets as inputs, Max-margin heterogeneous information machine for RGB-D action recognition, Estimating variable structure and dependence in multitask learning via gradients, Matrix completion with sparse measurement errors, Analysis of mobility based COVID-19 epidemic model using federated multitask learning, Group online adaptive learning, Harnessing lab knowledge for real-world action recognition, Low rank matrix minimization with a truncated difference of nuclear norm and Frobenius norm regularization, Finite rank kernels for multi-task learning, Regularizers for structured sparsity, Inferring multiple graphical structures, Multi-output learning via spectral filtering, Analysis on methods to effectively improve transfer learning performance, Composite kernel learning, Multi-domain learning by confidence-weighted parameter combination, Bayesian Sparse Partial Least Squares, Least absolute deviations learning of multiple tasks, kLog: a language for logical and relational learning with kernels, Low-rank representation-based object tracking using multitask feature learning with joint sparsity, Simultaneous nonparametric regression in RADWT dictionaries, Linearized and kernelized sparse multitask learning for predicting cognitive outcomes in Alzheimer's disease, A semantic tree method for image classification and video action recognition, Orthogonal sparse linear discriminant analysis, Exact matrix completion based on low rank Hankel structure in the Fourier domain, Convergence rate analysis of proximal gradient methods with applications to composite minimization problems, Unnamed Item, Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics, Max-norm optimization for robust matrix recovery, Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization, Slice inverse regression with score functions, Learning with tensors: a framework based on convex optimization and spectral regularization, Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions, Alternating direction multiplier method for matrix \(l_{2,1}\)-norm optimization in multitask feature learning problems, Mixed-norm regularization for brain decoding, Unnamed Item, Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM, An accelerated IRNN-iteratively reweighted nuclear norm algorithm for nonconvex nonsmooth low-rank minimization problems, Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach, Low-rank matrix recovery via regularized nuclear norm minimization, Tackling ordinal regression problem for heterogeneous data: sparse and deep multi-task learning approaches, Robust finite mixture regression for heterogeneous targets, Inductive matrix completion with feature selection, Low-rank approximation algorithms for matrix completion with random sampling, \(\ell_{2,0}\)-norm based selection and estimation 
for multivariate generalized linear models, Utilizing relevant RGB-D data to help recognize RGB images in the target domain, Regularized multidimensional scaling with radial basis functions, Guarantees of Riemannian Optimization for Low Rank Matrix Recovery, Discriminant analysis of regularized multidimensional scaling, Memory networks for fine-grained opinion mining, Sparsity-promoting algorithms for the discovery of informative Koopman-invariant subspaces, Least square regularized regression for multitask learning, Multi-task and Lifelong Learning of Kernels, Generalized Conditional Gradient for Sparse Estimation, Gap Safe screening rules for sparsity enforcing penalties, A Riemannian gossip approach to subspace learning on Grassmann manifold, Joint detection of malicious domains and infected clients, Truncated sparse approximation property and truncated \(q\)-norm minimization, Unnamed Item, Kernel collaborative online algorithms for multi-task learning, Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion, An efficient primal dual prox method for non-smooth optimization, Matrix Support Functionals for Inverse Problems, Regularization, and Learning, Sensitivity of low-rank matrix recovery, Stable recovery of low rank matrices from nuclear norm minimization, Pointwise mutual information sparsely embedded feature selection, Proof methods for robust low-rank matrix recovery, Multilabel classification through random graph ensembles, A unified approach to computing the nearest complex polynomial with a given zero
Uses Software
Cites Work
- Reduced-rank regression for the multivariate linear model
- Optimal rates for the regularized least-squares algorithm
- Nonparametric identification of population models via Gaussian processes
- The Collinearity Problem in Linear Regression. The Partial Least Squares (PLS) Approach to Generalized Inverses
- Canonical Correlation Analysis: An Overview with Application to Learning Methods
- Learning Theory
- doi:10.1162/153244304322765658
- Learning Theory and Kernel Machines
- Learning Theory
- On Learning Vector-Valued Functions
- Relations between two sets of variates
- Theory of Reproducing Kernels
- The elements of statistical learning. Data mining, inference, and prediction