A New Approach to Distributed Hypothesis Testing and Non-Bayesian Learning: Improved Learning Rate and Byzantine Resilience
From MaRDI portal
Publication: 4957701
Abstract: We study a setting where a group of agents, each receiving partially informative private signals, seek to collaboratively learn the true underlying state of the world (from a finite set of hypotheses) that generates their joint observation profiles. To solve this problem, we propose a distributed learning rule that differs fundamentally from existing approaches, in that it does not employ any form of "belief-averaging". Instead, agents update their beliefs based on a min-rule. Under standard assumptions on the observation model and the network structure, we establish that each agent learns the truth asymptotically almost surely. As our main contribution, we prove that with probability 1, each false hypothesis is ruled out by every agent exponentially fast at a network-independent rate that is strictly larger than existing rates. We then develop a computationally efficient variant of our learning rule that is provably resilient to agents who do not behave as expected (as represented by a Byzantine adversary model) and deliberately try to spread misinformation.
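To make the contrast with belief-averaging concrete, the following is a minimal illustrative sketch of a min-rule belief update for a single agent: a local Bayesian step using the agent's private signal, followed by an elementwise minimum over the beliefs received from neighbors and a renormalization. The function name, interface, and the exact placement of the min step are assumptions for illustration, not necessarily the paper's precise rule.

```python
import numpy as np

def min_rule_update(own_belief, neighbor_beliefs, likelihoods):
    """Hypothetical sketch of one min-rule belief update.

    own_belief:       (H,) array, current belief over H hypotheses
    neighbor_beliefs: list of (H,) arrays received from neighbors
    likelihoods:      (H,) array, likelihood of the agent's new
                      private signal under each hypothesis
    """
    # Local Bayesian step: reweight the own belief by the signal
    # likelihoods, then renormalize to a probability vector.
    local = own_belief * likelihoods
    local = local / local.sum()

    # Min-rule fusion (instead of averaging): a hypothesis keeps
    # mass only to the extent that NO neighbor has suppressed it,
    # so a false hypothesis decays at the fastest neighbor's rate.
    fused = np.minimum.reduce([local] + list(neighbor_beliefs))
    return fused / fused.sum()

# Example: two hypotheses; one neighbor has nearly ruled out
# hypothesis 1, so the fused belief concentrates on hypothesis 0.
belief = min_rule_update(
    own_belief=np.array([0.5, 0.5]),
    neighbor_beliefs=[np.array([0.9, 0.1])],
    likelihoods=np.array([0.6, 0.4]),
)
```

Averaging would instead mix the neighbor's residual doubt back in; the min keeps the strongest available evidence against each false hypothesis, which is the intuition behind the faster, network-independent learning rate claimed in the abstract.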
Cited in (4):
- Graph-theoretic approaches for analyzing the resilience of distributed control systems: a tutorial and survey
- Social Learning and Distributed Hypothesis Testing
- Byzantine-Resilient Distributed Hypothesis Testing With Time-Varying Network Topology
- On the non-resiliency of subsequence reduced resilient consensus in multiagent networks