Time-optimal message-efficient work performance in the presence of faults
Publication: 5361406
DOI: 10.1145/197917.198082 · zbMath: 1373.68090 · OpenAlex: W2007754771 · MaRDI QID: Q5361406
Alain Mayer, Roberto De Prisco, Mordechai M. Yung
Publication date: 29 September 2017
Published in: Proceedings of the thirteenth annual ACM symposium on Principles of distributed computing - PODC '94
Full work available at URL: https://doi.org/10.1145/197917.198082
Classification: Analysis of algorithms and problem complexity (68Q25) ⋮ Distributed systems (68M14) ⋮ Distributed algorithms (68W15)
Related Items (16)
Performing work in broadcast networks
Dynamic load balancing with group communication
Deterministic Fault-Tolerant Distributed Computing in Linear Time and Communication
Performing tasks on synchronous restartable message-passing processors
The complexity of synchronous iterative Do-All with crashes
Dealing with undependable workers in decentralized network supercomputing
Emulating shared-memory do-all algorithms in asynchronous message-passing systems
Doing-it-all with bounded work and communication
Ordered and delayed adversaries and how to work against them on a shared channel
The Do-All problem with Byzantine processor failures
A robust randomized algorithm to perform independent tasks
Randomization helps to perform independent tasks reliably
Cooperative computing with fragmentable and mergeable groups
Parallel computing, failure recovery, and extreme values
Efficient gossip and robust distributed computation
Performing work with asynchronous processors: Message-delay-sensitive bounds