Proximal Multitask Learning Over Networks With Sparsity-Inducing Coregularization

From MaRDI portal
Publication:4621089

DOI: 10.1109/TSP.2016.2601282
zbMATH Open: 1414.94437
arXiv: 1509.01360
MaRDI QID: Q4621089


Authors: R. Nassif, Cédric Richard, A. Ferrari, Ali H. Sayed


Publication date: 8 February 2019

Published in: IEEE Transactions on Signal Processing

Abstract: In this work, we consider multitask learning problems where clusters of nodes are interested in estimating their own parameter vector. Cooperation among clusters is beneficial when the optimal models of adjacent clusters share a significant number of similar entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by non-differentiable terms that promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced, then specialized to the case of sparsity-promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of ℓ1-norms is derived to achieve higher efficiency. We also provide conditions on the step-sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
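The core building block the abstract refers to is the forward-backward (proximal-gradient) iteration: a gradient step on the smooth mean-square-error term followed by the proximal map of the nonsmooth ℓ1 regularizer, whose closed form is entrywise soft-thresholding. The following is a minimal single-agent NumPy sketch of that iteration on a toy sparse least-squares problem; the function names, the toy data, and the step-size choice are illustrative assumptions, and the paper's distributed multitask setting (clusters, diffusion combination steps, coregularization across neighbors) is not reproduced here.

```python
import numpy as np

def prox_weighted_l1(v, thresholds):
    # Entrywise soft-thresholding: closed-form proximal operator of a
    # weighted l1-norm, prox(v)_i = sign(v_i) * max(|v_i| - t_i, 0).
    return np.sign(v) * np.maximum(np.abs(v) - thresholds, 0.0)

def forward_backward(A, b, weights, n_iter=500):
    # Forward step: gradient descent on the smooth term (1/2)||A x - b||^2.
    # Backward step: proximal map of the weighted l1 regularizer.
    # Step size mu < 2 / L with L the Lipschitz constant of the gradient
    # (largest eigenvalue of A^T A); here a simple safe choice.
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # forward (gradient) step
        x = prox_weighted_l1(x - mu * grad, mu * weights)  # backward step
    return x

# Toy problem: recover a sparse vector from noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_est = forward_backward(A, b, weights=0.01 * np.ones(20))
```

With mild regularization weights, the iterate converges close to the sparse ground truth; larger weights shrink the estimate more aggressively toward zero.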


Full work available at URL: https://arxiv.org/abs/1509.01360








Cited In (2)





This page was built for publication: Proximal Multitask Learning Over Networks With Sparsity-Inducing Coregularization
