A New Approach to Distributed Hypothesis Testing and Non-Bayesian Learning: Improved Learning Rate and Byzantine Resilience

From MaRDI portal
Publication:4957701

DOI: 10.1109/TAC.2020.3033126
zbMATH Open: 1471.93021
arXiv: 1907.03588
OpenAlex: W3094118074
MaRDI QID: Q4957701


Authors: Aritra Mitra, John A. Richards, Shreyas Sundaram


Publication date: 9 September 2021

Published in: IEEE Transactions on Automatic Control

Abstract: We study a setting where a group of agents, each receiving partially informative private signals, seek to collaboratively learn the true underlying state of the world (from a finite set of hypotheses) that generates their joint observation profiles. To solve this problem, we propose a distributed learning rule that differs fundamentally from existing approaches, in that it does not employ any form of "belief-averaging". Instead, agents update their beliefs based on a min-rule. Under standard assumptions on the observation model and the network structure, we establish that each agent learns the truth asymptotically almost surely. As our main contribution, we prove that with probability 1, each false hypothesis is ruled out by every agent exponentially fast at a network-independent rate that is strictly larger than existing rates. We then develop a computationally efficient variant of our learning rule that is provably resilient to agents who do not behave as expected (as represented by a Byzantine adversary model) and deliberately try to spread misinformation.
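The min-rule described in the abstract can be sketched as follows. This is a hedged illustration, not the paper's exact algorithm: it assumes each agent first performs a local Bayesian update with its private signal's likelihoods, then pools by taking the entrywise minimum of its local posterior and its neighbors' current beliefs, and renormalizes. The function name `min_rule_update` and the array layout are choices made here for illustration.

```python
import numpy as np

def min_rule_update(beliefs, likelihoods, neighbors):
    """One round of a min-rule belief update (illustrative sketch).

    beliefs:     (n_agents, n_hyp) array; each row is an agent's current
                 belief over the finite hypothesis set (rows sum to 1).
    likelihoods: (n_agents, n_hyp) array; likelihood of each agent's
                 latest private signal under each hypothesis.
    neighbors:   list of neighbor-index lists, one per agent.
    """
    n_agents, _ = beliefs.shape
    new_beliefs = np.empty_like(beliefs)
    for i in range(n_agents):
        # Local Bayesian update from agent i's own signal.
        local = likelihoods[i] * beliefs[i]
        local = local / local.sum()
        if neighbors[i]:
            # Pool by entrywise minimum over the local posterior
            # and the neighbors' current beliefs (no averaging).
            pooled = np.minimum(local, beliefs[neighbors[i]].min(axis=0))
        else:
            pooled = local
        # Renormalize so the updated belief is a probability vector.
        new_beliefs[i] = pooled / pooled.sum()
    return new_beliefs
```

Because a false hypothesis needs only one agent (or neighbor) to assign it low belief for the minimum to pull everyone's belief down, this kind of pooling is what drives the network-independent exponential rejection rate claimed in the abstract.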


Full work available at URL: https://arxiv.org/abs/1907.03588








Cited In (4)





