Large sparse numerical optimization (Q792069)

From MaRDI portal
scientific article

    Statements

    Large sparse numerical optimization (English)
    1984
    In the first chapter of the book the author considers methods for solving \(Ax=b\), where \(A\) is a large sparse matrix of order \(n\). Two basic approaches, direct and iterative, are analysed.

    In the next chapter he considers the overdetermined linear least squares problem \[ \text{minimize } \| Ax-b\|_2, \] where it is assumed that \(A\) has full column rank. In connection with this problem the method of \textit{G. Peters} and \textit{J. H. Wilkinson} [Comput. J. 13, 309--316 (1970; Zbl 0195.44804)] is presented. At the end of this chapter he studies the equality-constrained problem \[ \text{minimize } \| Ax-b\|_2 \quad\text{subject to}\quad Cx=d, \] where \(A\) is \(m\) by \(n\) and \(C\) is \(t\) by \(n\) with \(t\le n\le m\), and it is assumed that \(\mathrm{rank}(C)=t\). For this problem the following methods are presented: the projection method, the elimination method, the method of Lagrange multipliers, and the infinite weights method.

    Chapter 3 deals with some problems of linear programming. Consider the problem: (1) minimize \(c^Tx\) subject to \(Ax=b\), \(\ell \le x\le u\), where \(A\) is \(m\) by \(n\) and \(m<n\). Referring to problem (1), the author discusses some up-to-date variants of the simplex algorithm.

    In chapter 4, entitled ``Nonlinear equations and nonlinear least squares'', he studies the problem (2) minimize \(\| F(x)\|_2\), where \(F: R^n\to R^m\), \(m\ge n\), and \(F\) is at least continuously differentiable. In the case \(m=n\) with \(F(x^*)=0\), (2) can be expressed as the zero-finding problem: (3) solve \(F(x)=0\). Algorithms are given for problems (2) and (3) and for the computation of \(F'(x)\).

    The fifth chapter deals with the large unconstrained optimization problem: minimize \(f(x)\), where \(f: R^n\to R^1\) is twice continuously differentiable. It is supposed that the Hessian \(\nabla^2f(x)\) is large and sparse. In studying this problem the author presents some new and interesting methods.
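    The method of Lagrange multipliers for the equality-constrained least squares problem above can be sketched as follows. This is a minimal dense illustration on small random data (the matrices \(A\), \(b\), \(C\), \(d\) here are hypothetical, not taken from the book); for the truly large sparse problems the book addresses, one would solve the KKT system with a sparse factorization rather than a dense solve.

    ```python
    import numpy as np

    # Equality-constrained least squares:
    #   minimize ||Ax - b||_2  subject to  Cx = d.
    # Setting the gradient of the Lagrangian
    #   L(x, lam) = 0.5*||Ax - b||^2 + lam^T (Cx - d)
    # to zero yields the symmetric indefinite KKT system
    #   [ A^T A   C^T ] [  x  ]   [ A^T b ]
    #   [   C      0  ] [ lam ] = [   d   ]
    rng = np.random.default_rng(0)
    m, n, t = 8, 5, 2          # t <= n <= m, as in the review
    A = rng.standard_normal((m, n))   # full column rank (a.s.)
    b = rng.standard_normal(m)
    C = rng.standard_normal((t, n))   # rank(C) = t (a.s.)
    d = rng.standard_normal(t)

    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((t, t))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    x, lam = sol[:n], sol[n:]

    # The constraint holds exactly (up to rounding), and the residual's
    # normal-equations stationarity condition A^T(Ax - b) = -C^T lam holds.
    print(np.allclose(C @ x, d))
    ```

    The same block structure carries over to the large sparse setting; there the point of the direct methods discussed in chapter 1 is to factor \(K\) while preserving sparsity.
    
    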
    large sparse numerical optimization
    linear least squares problem
    projection method
    elimination method
    method of Lagrange multipliers
    infinite weights method
    simplex algorithm
    nonlinear least squares
    unconstrained optimization