Emulating shared-memory do-all algorithms in asynchronous message-passing systems
Publication: 666160
DOI: 10.1016/j.jpdc.2009.12.002
zbMath: 1233.68064
OpenAlex: W2037854601
MaRDI QID: Q666160
Dariusz R. Kowalski, Mariam Momenzadeh, Alexander A. Schwarzmann
Publication date: 7 March 2012
Published in: Journal of Parallel and Distributed Computing
Full work available at URL: https://doi.org/10.1016/j.jpdc.2009.12.002
MSC classification: Distributed systems (68M14); Reliability, testing and fault tolerance of networks and computer systems (68M15); Distributed algorithms (68W15)
Cites Work
- Emulating shared-memory do-all algorithms in asynchronous message-passing systems
- Dynamic load balancing with group communication
- A robust randomized algorithm to perform independent tasks
- Cooperative computing with fragmentable and mergeable groups
- Distributed scheduling for disconnected cooperation
- Efficient gossip and robust distributed computation
- Performing work with asynchronous processors: Message-delay-sensitive bounds
- Performing Work Efficiently in the Presence of Faults
- Sharing memory robustly in message-passing systems
- Algorithms for the Certified Write-All Problem
- Parallel Algorithms with Processor Failures and Delays
- Writing-all deterministically and optimally using a nontrivial number of asynchronous processors
- Work-Competitive Scheduling for Cooperative Computing with Dynamic Groups
- Time-optimal message-efficient work performance in the presence of faults
- Principles of Distributed Systems