Supervised Learning Under Distributed Features

From MaRDI portal
Publication:4628252

DOI: 10.1109/TSP.2018.2881661
zbMATH Open: 1414.90370
arXiv: 1805.11384
OpenAlex: W2805185133
MaRDI QID: Q4628252
FDO: Q4628252


Authors: Bicheng Ying, Kun Yuan, Ali H. Sayed


Publication date: 6 March 2019

Published in: IEEE Transactions on Signal Processing

Abstract: This work studies the problem of learning under both large datasets and large-dimensional feature spaces. The feature information is assumed to be spread across agents in a network, where each agent observes some of the features. Through local cooperation, the agents interact with each other to solve an inference problem and converge toward the global minimizer of an empirical risk. We study this problem exclusively in the primal domain and propose new, effective distributed solutions with guaranteed convergence to the minimizer at a linear rate under strong convexity. This is achieved by combining a dynamic diffusion construction, a pipeline strategy, and variance-reduction techniques. Simulation results illustrate the conclusions.
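The feature-partitioned setting described in the abstract can be illustrated with a minimal sketch. This is NOT the paper's dynamic-diffusion, pipelined, variance-reduced algorithm; it is a toy full-gradient version of the same problem structure, in which each agent owns a disjoint block of the feature space of a least-squares problem, predictions are formed by aggregating the agents' partial inner products, and each agent updates only its own sub-vector. All variable names and the partitioning scheme here are illustrative assumptions.

```python
import numpy as np

# Toy feature-distributed least-squares (illustrative sketch only, not the
# paper's method). K agents each hold a disjoint block of the d features;
# the prediction for a sample is the SUM of the agents' partial inner
# products, and each agent performs gradient steps on its own block.

rng = np.random.default_rng(0)
N, d, K = 200, 12, 3                         # samples, features, agents
blocks = np.array_split(np.arange(d), K)     # disjoint feature partition

X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(N)

w = [np.zeros(len(b)) for b in blocks]       # each agent's local sub-vector
mu = 0.1                                     # step size

for _ in range(500):
    # Aggregation: sum of the agents' partial predictions (one scalar/sample).
    pred = sum(X[:, b] @ w[k] for k, b in enumerate(blocks))
    err = pred - y
    # Each agent descends on its own feature block using the shared error.
    for k, b in enumerate(blocks):
        w[k] -= mu * X[:, b].T @ err / N

w_full = np.concatenate(w)
print(np.linalg.norm(w_full - w_true))       # small gap to w_true
```

Because the empirical risk is strongly convex here, these block updates are exactly full-batch gradient descent on the concatenated vector, so the iterates contract linearly toward the global minimizer; the paper's contribution lies in achieving such guarantees with decentralized cooperation, pipelining, and variance reduction rather than a synchronized full gradient.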


Full work available at URL: https://arxiv.org/abs/1805.11384







Cited In (4)





