Automatic differentiation of large sparse systems (Q2277762)

From MaRDI portal
Cites work:
- A test problem generator for large-scale unconstrained optimization
- Q3833694
- Automatic differentiation of large sparse systems
- Truncated Newton method for sparse unconstrained optimization using automatic differentiation
- Q4878822
- Q4727308
- The Arithmetic of Differentiation


scientific article

Language: English
Label: Automatic differentiation of large sparse systems
Description: scientific article

    Statements

    Automatic differentiation of large sparse systems (English)
    Publication year: 1990
    The authors are concerned with the task of calculating the gradient \(\nabla F(x)\) and the Hessian \(\nabla^2 F(x)\) of a function \(F(x)\), \(x\in\mathbb{R}^n\), by exploiting features of modern computer languages (for instance, ADA and Pascal-SC) that allow the definition of operators and data structures. Following previous work by \textit{A. Griewank} [Mathematical programming, Proc. 13th Int. Symp., Tokyo/Jap. 1988, Math. Appl., Jap. Ser. 6, 83-107 (1989; Zbl 0696.65015)] and \textit{L. B. Rall} [``Optimal implementation of differential arithmetic'', Computer arithmetic. Scientific computation and programming language (1987; Zbl 0614.65001)], which treat automatic differentiation by symbolic computation, five new algebras are introduced and discussed here. Sparsity of the function \(F(x)\) is taken into account. The proposed automatic differentiation is then coupled with two well-known minimization algorithms: the truncated Newton method as proposed by \textit{R. S. Dembo} [Math. Program. Study 31, 43-71 (1987; Zbl 0635.90072)] and the conjugate gradient algorithm based on the Polak-Ribière formula. Numerical results obtained in solving the extended Dixon problem, the extended Powell problem with \(n=4,8,20,40,80\), and the Dixon-Maany problem with \(n=3000\) are reported; they amply demonstrate the effectiveness of the authors' approach.
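    The operator-overloading idea underlying the reviewed paper can be sketched with forward-mode dual numbers. This is a minimal illustration of the general technique only, not the five algebras of the paper (which works in ADA/Pascal-SC); the `Dual` class and `gradient` helper below are hypothetical names introduced for this sketch.

```python
# Minimal sketch of automatic differentiation via operator overloading,
# assuming forward-mode dual numbers: a + b*eps with eps^2 = 0, where
# b propagates the derivative alongside the value.

class Dual:
    """A dual number: `val` is the function value, `dot` its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        # Promote plain constants (derivative 0) to Dual.
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __sub__(self, other):
        other = self._lift(other)
        return Dual(self.val - other.val, self.dot - other.dot)

    def __mul__(self, other):
        other = self._lift(other)
        # Product rule: (uv)' = u'v + uv'.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

    def __pow__(self, k):
        # Chain rule for an integer power: (u^k)' = k*u^(k-1)*u'.
        return Dual(self.val ** k, k * self.val ** (k - 1) * self.dot)


def gradient(f, x):
    """Gradient of f: R^n -> R via n forward sweeps, one per coordinate."""
    n = len(x)
    grad = []
    for i in range(n):
        # Seed coordinate i with derivative 1, all others with 0.
        args = [Dual(x[j], 1.0 if j == i else 0.0) for j in range(n)]
        grad.append(f(args).dot)
    return grad


# Example: F(x) = x0^2 + 3*x0*x1, so grad F = (2*x0 + 3*x1, 3*x0).
F = lambda x: x[0] ** 2 + 3 * x[0] * x[1]
```

One such sweep per coordinate yields the full gradient; exploiting sparsity, as the paper does, reduces the number of sweeps needed when most partial derivatives vanish.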
    Keywords: gradient; Hessian; symbolic computation; automatic differentiation; minimization algorithms; truncated Newton method; conjugate gradient algorithm; numerical results; Dixon problem; Powell problem; Dixon-Maany problem