Two fast parallel prime number sieves (Q1336051)

From MaRDI portal
Property / DOI: 10.1006/inco.1994.1082 / rank
Normal rank
Property / author: Jonathan P. Sorenson / rank
Normal rank
Property / reviewed by: A. Bijlsma / rank
Normal rank
Property / MaRDI profile type: MaRDI publication profile / rank
Normal rank
Property / full work available at URL: https://doi.org/10.1006/inco.1994.1082 / rank
Normal rank
Property / OpenAlex ID: W1996893421 / rank
Normal rank

scientific article
Language: English
Label: Two fast parallel prime number sieves
Description: scientific article

    Statements

    Two fast parallel prime number sieves (English)
    12 October 1994
    The paper presents two parallel algorithms that list all prime numbers up to \(n\), a problem for which sequential algorithms of complexity \(O(n/\log \log n)\) are known [\textit{H. G. Mairson}, Commun. ACM 20, 664-669 (1977; Zbl 0355.68040); \textit{P. Pritchard}, Commun. ACM 24, 18-23, 772 (1981; Zbl 0454.68084)]. In the `Exclusive Read, Exclusive Write' algebraic parallel random access model of computation, the first algorithm uses \(O(\log n)\) time and \(O(n/(\log n\log\log n))\) processors. It is based on a sequential algorithm of \textit{P. Pritchard} [Sci. Comput. Program. 9, 17-35 (1987; Zbl 0627.68033)]. The second algorithm uses \(O(\sqrt n)\) time and \(O(\sqrt n)\) processors, so it is theoretically less efficient. However, because it avoids very fine-grained parallelism, it may be more efficient in practice. This observation is formalized by analyzing both algorithms in a model of parallel computation that accounts for the communication latency inherent in a globally shared memory [\textit{A. Aggarwal}, \textit{A. K. Chandra} and \textit{M. Snir}, Annual ACM Symposium on Parallel Algorithms and Architectures 1, 11-21 (1989)].
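    The coarse-grained structure of the second algorithm can be illustrated by a segmented Sieve of Eratosthenes: the interval up to \(n\) is split into about \(\sqrt n\) blocks of length about \(\sqrt n\), each of which is sieved independently against the base primes below \(\sqrt n\) and could therefore be handled by its own processor. The Python sketch below runs the blocks sequentially and is only an illustration of such a decomposition, not the algorithm from the paper; the function name and all implementation details are assumptions.

import math

def segmented_sieve(n):
    """Illustrative segmented sieve: primes up to n via ~sqrt(n) independent blocks.

    Each block of length ~sqrt(n) is sieved only against the base primes
    below sqrt(n); in a parallel setting every block could be assigned to
    its own processor (hypothetical sketch, not the paper's algorithm).
    """
    if n < 2:
        return []
    limit = math.isqrt(n)
    # Base primes up to sqrt(n) via a plain sieve of Eratosthenes.
    base = bytearray([1]) * (limit + 1)
    base[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if base[p]:
            base[p * p :: p] = bytearray(len(base[p * p :: p]))
    base_primes = [p for p in range(2, limit + 1) if base[p]]

    primes = list(base_primes)
    # Each block [low, high) is sieved independently of all other blocks.
    for low in range(limit + 1, n + 1, limit):
        high = min(low + limit, n + 1)
        block = bytearray([1]) * (high - low)
        for p in base_primes:
            # First multiple of p inside the block (at least p*p).
            start = max(p * p, ((low + p - 1) // p) * p)
            block[start - low :: p] = bytearray(len(block[start - low :: p]))
        primes.extend(low + i for i in range(high - low) if block[i])
    return primes

print(segmented_sieve(100))  # the 25 primes below 100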
    globally shared memory

    Identifiers
