The bit-cost of some algorithms for the solution of linear systems (Q1108740)
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1016/0898-1221(88)90036-3
Property / OpenAlex ID: W1969720527
Property / cites work: Fast multiplication of large numbers
Property / cites work: Complexity measures for matrix multiplication algorithms
Property / cites work: The bit-complexity of arithmetic algorithms
Property / cites work: Fast Multiple-Precision Evaluation of Elementary Functions
Property / cites work: Q4871634
Property / cites work: Round-off error analysis of iterations for large linear systems
Property / cites work: Error analysis of an APA algorithm for the parallel solution of some special Toeplitz linear systems

Language: English
Label: The bit-cost of some algorithms for the solution of linear systems
Description: scientific article

    Statements

    The bit-cost of some algorithms for the solution of linear systems (English)
    Publication year: 1988
    The aim of the paper is to compare the costs of algorithms run in sequential and parallel modes. For a particular algorithm `of size n' the author defines two costs, the Bit Sequential Cost and the Bit Parallel Cost: \(BSC(n)=O[ASC(n)\cdot A(f(n,t))]\) and \(BPC(n)=O[APC(n)\cdot A'(f(n,t))]\), where ASC is the arithmetic sequential cost and APC the arithmetic parallel cost. The functions \(A\) and \(A'\) are upper bounds on the respective costs of performing an arithmetic operation on \(K\) digits, and \(f(n,t)\) is the number of digits \(K\) to be used when performing each arithmetic operation. Upper bounds on these costs are derived for Gaussian elimination and for the Jacobi and Newton iterations. The paper concludes with a brief analysis of the solution of a triangular Toeplitz linear system.
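    The multiplicative form of these bounds can be made concrete with a small numerical sketch. The following Python fragment is not from the paper: it assumes, purely for illustration, an \(n^3\) operation count for Gaussian elimination, a schoolbook per-operation bit cost \(A(K)=K^2\), and a made-up precision function \(f(n,t)\); all three choices are stand-ins for the bounds actually derived by the author.

```python
# Illustrative sketch only (not the author's algorithm or bounds):
# evaluates a bit-cost bound of the multiplicative form
#     BSC(n) = ASC(n) * A(f(n, t))
# under assumed cost functions.

import math


def asc_gauss(n: int) -> int:
    """Assumed arithmetic sequential cost of Gaussian elimination, ~ n^3 operations."""
    return n ** 3


def a_schoolbook(k: int) -> int:
    """Assumed bit cost of one arithmetic operation on k digits (schoolbook multiplication)."""
    return k ** 2


def f(n: int, t: int) -> int:
    """Assumed working precision: t required digits plus a few guard digits growing with n."""
    return t + math.ceil(math.log10(n + 1))


def bsc_gauss(n: int, t: int) -> int:
    """Bit sequential cost bound BSC(n) = ASC(n) * A(f(n, t)) under the assumptions above."""
    return asc_gauss(n) * a_schoolbook(f(n, t))


if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, bsc_gauss(n, t=16))
```

    Substituting a fast-multiplication bound for the schoolbook cost (for instance the \(O(K\log K\log\log K)\) cost from the cited work on fast multiplication of large numbers) changes only the second factor, which is the point of separating the arithmetic count from the per-operation bit cost.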
    Keywords: computational cost; parallel processors; Jacobi iteration; Bit Sequential Cost; Bit Parallel Cost; Gaussian elimination; Newton iteration; triangular Toeplitz linear system