Parallel processing can be harmful: The unusual behavior of interpolation search (Q1825662)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Parallel processing can be harmful: The unusual behavior of interpolation search | scientific article | |
Statements
Parallel processing can be harmful: The unusual behavior of interpolation search (English)
1989
It is known that the interpolation search algorithm for retrieving a key in an ordered array of \(N\) entries has expected running time \(O(\log\log N)\) when the array entries and the key searched for are drawn from the uniform distribution over the interval \((0,1)\); in fact, the expected running time is bounded by \(\log\log N + O(1)\). The paper investigates the impact of parallel processing on this problem, and the results indicate that parallel processing is not very helpful here.

If \(P\) processors are available, an evident improvement on the standard algorithm, called VARIATION(P) in the paper, performs standard interpolation search until the remaining segment of the array consists of at most \(P\) entries, at which stage these \(P\) entries are searched in parallel by the \(P\) processors. From the known analysis of interpolation search it follows that VARIATION(P) has expected running time \(\log\log N - \log\log P + O(1)\). The surprising result is that this trivial algorithm is the optimal way of using \(P\) processors: no algorithm that uses \(P\) processors can improve on the number of stages by more than an additive constant. Moreover, if the communication overhead caused by the processors' need to exchange information is accounted for by any multiplicative constant \(c > 1\), the resulting expected-case running time exceeds that of the almost sequential algorithm VARIATION(P).

The proofs in the paper are based on techniques from earlier papers by \textit{A. C. Yao} and \textit{F. F. Yao}. Probabilistic estimates on Bernoulli trials show that, with high probability, each stage reduces the length of the array segment still under consideration to about its square root; with \(P\) processors the segment becomes a further factor of \(P\) smaller per stage, but over all iterations the total gain from these factors of \(P\) amounts only to an additive constant number of stages.
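The \(\log\log N - \log\log P\) bound can be recovered from the square-root reduction just described; a short sketch of the arithmetic, assuming each sequential stage cuts a segment of size \(n\) down to about \(\sqrt{n}\) (logarithms base 2):

\[
n_0 = N, \qquad n_{k+1} \approx \sqrt{n_k} \quad\Longrightarrow\quad n_k \approx N^{1/2^k}.
\]

The sequential phase of VARIATION(P) stops once \(n_k \le P\), i.e. \(N^{1/2^k} \le P\), which gives

\[
2^k \ge \frac{\log N}{\log P} \quad\Longrightarrow\quad k \ge \log\log N - \log\log P,
\]

after which a single parallel stage over the at most \(P\) remaining entries finishes the search, matching the stated \(\log\log N - \log\log P + O(1)\) expected stages.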
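Since VARIATION(P) is only described here in prose, the following Python sketch may help fix the idea. It is an illustration under stated assumptions, not the paper's code: the function name, the probe rule, and the sequential simulation of the final parallel scan are all supplied here.

```python
def variation_p(arr, key, P):
    """Sketch of VARIATION(P): run standard interpolation search until the
    remaining segment holds at most P entries, then let the P processors
    search that segment in one parallel step (simulated sequentially here).

    Assumes arr is sorted; the paper's analysis further assumes the entries
    and the key are uniform draws from (0, 1).
    """
    lo, hi = 0, len(arr) - 1
    # Sequential phase: standard interpolation probes.
    while hi - lo + 1 > P:
        if key < arr[lo] or key > arr[hi]:
            return -1  # key cannot lie in the remaining segment
        if arr[hi] == arr[lo]:
            break  # degenerate segment; fall through to the scan
        # Probe position interpolated from the key's relative rank.
        pos = lo + int((hi - lo) * (key - arr[lo]) / (arr[hi] - arr[lo]))
        if arr[pos] == key:
            return pos
        elif arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    # Parallel phase: at most P entries remain, so P processors could each
    # check one entry in a single stage; a plain loop stands in for that.
    for i in range(lo, hi + 1):
        if arr[i] == key:
            return i
    return -1
```

The paper's point is that this near-sequential scheme is already optimal: the parallel phase saves only the last \(\log\log P + O(1)\) stages, and any attempt to spend the \(P\) processors on speculative probes earlier gains at most an additive constant.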
parallel algorithms
interpolation search
expected case time analysis