Parallel depth first search. II: Analysis (Q1116344)

Property / cites work: Parallel depth first search. I: Implementation
Property / cites work: Depth-first iterative-deepening: An optimal admissible tree search
Property / cites work: Q3856120
Property / cites work: Q3817622
Property / cites work: Anomalies in parallel branch-and-bound algorithms
Property / cites work: Q3773322
Property / cites work: On Maintaining Dynamic Information in a Concurrent Environment
Property / cites work: MANIP—A Multicomputer Architecture for Solving Combinatorial Extremum-Search Problems
Property / full work available at URL: https://doi.org/10.1007/bf01389001
Property / OpenAlex ID: W1988185570


Language: English
Label: Parallel depth first search. II: Analysis
Description: scientific article

    Statements

    Parallel depth first search. II: Analysis (English)
    1987
    [For part I see ibid. 16, No. 6, 479-499 (1987; Zbl 0665.68048).] The paper presents an analysis of the authors' formulation of parallel depth first search (DFS). At the heart of this formulation is a dynamic work-distribution scheme that divides the work among the processors. In order to compare the proposed strategy on ring, hypercube and shared-memory architectures, an isoefficiency function is introduced: if the problem size W needs to grow as f(N), where N is the number of processors, to maintain an efficiency E, then f(N) is the isoefficiency function. Furthermore, the concept of isoefficiency is used to characterize the scalability of parallel algorithms, i.e. of those algorithms for which linear speedup on arbitrarily many processors can be obtained simply by increasing the problem size.
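
    To make the isoefficiency definition above concrete, the following Python sketch works through a purely illustrative example. The overhead model To(W, N) = c * N * log2(N) and the constants are assumptions chosen for the illustration, not results of the paper, which derives architecture-specific overhead terms for the ring, hypercube and shared-memory machines it analyzes.

        # Illustrative sketch of the isoefficiency idea (assumed overhead model, not the paper's).
        import math

        C = 10.0          # assumed per-distribution overhead constant (hypothetical)
        TARGET_E = 0.8    # efficiency we want to maintain

        def efficiency(W, N):
            """Efficiency E = W / (W + To(W, N)) under the assumed overhead model."""
            To = C * N * math.log2(N)   # assumed total overhead of work distribution
            return W / (W + To)

        def isoefficiency(N, target=TARGET_E):
            """Smallest problem size W keeping efficiency at `target` on N processors.
            Solving W / (W + To) = E for W gives W = E / (1 - E) * To."""
            To = C * N * math.log2(N)
            return target / (1.0 - target) * To

        for N in (2, 4, 16, 64, 256):
            W = isoefficiency(N)
            print(f"N={N:4d}  required W={W:10.1f}  check E={efficiency(W, N):.3f}")

    Under this assumed overhead model the isoefficiency function is f(N) proportional to N log N, i.e. the problem size must grow slightly faster than linearly in the number of processors to keep the efficiency constant; the paper itself derives the actual isoefficiency functions for each target architecture.
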
    depth first search
    work-distribution scheme
    isoefficiency function
    parallel algorithms
