A new conic method for unconstrained minimization (Q1197422)

From MaRDI portal
Property / reviewed by
 
Property / reviewed by: Morgan A. Hanson / rank
Normal rank
scientific article
Language: English
Label: A new conic method for unconstrained minimization

    Statements

    A new conic method for unconstrained minimization (English)
    16 January 1993
    A conic function has the form \[ c(x)={1\over 2}{x^T Qx\over (1+p^T x)^2}+{b^T x\over 1+p^T x}+a, \] where \(Q\) is the \(n\times n\) symmetric conjugacy matrix and \(p\in R^n\) is the gauge vector defining the horizon of \(c(x)\). This is related to the quadratic form \(q(y)=c(x(y))={1\over 2} y^T Qy+b^T y+a\) through the scaling \(y=x/(1+p^T x)\), \(x=y/(1-p^T y)\). The function \(c(x)\) has a unique minimum only if \(Q\) is positive definite. In this case, provided that \(1+p^T Q^{-1}b\neq 0\), the minimizer \(\beta\) is given by \(\beta=-Q^{-1}b/(1+p^T Q^{-1}b)\). Most conic methods model the objective function \(f(x)\) around \(x_i\) by a model of the form \[ c(x_i+d)=f_i+{g^T_i d\over 1+p^T_i d}+{{\textstyle{1\over 2}}d^T F_i d\over(1+p^T_i d)^2}, \] where \(f_i\), \(g_i\), and \(F_i\) are, respectively, the function value, the gradient, and the Hessian of \(f(x)\) at \(x_i\). The authors of this paper, however, use a model of the form \[ c(x)={1\over 2}\cdot{1+p^T x\over 1+p^T\beta}\nabla c(x)^T(x-\beta)+c(\beta), \] which does not involve an estimate of the conjugacy matrix \(Q\) or of the Hessian matrix of the objective function. An algorithm based on this model is given, and it converges in \(n+1\) iterations on conic functions of \(n\) variables. Numerical results for many standard test functions indicate that the algorithm is very robust and superior to the corresponding quadratic method in both function evaluations and number of iterations.
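    The closed-form minimizer and the scaling relation above can be checked numerically. The following sketch (Python with NumPy; the matrix \(Q\), vectors \(b\), \(p\), and scalar \(a\) are randomly generated illustrative assumptions, not data from the paper) verifies that \(y=x/(1+p^Tx)\) maps \(\beta\) to the quadratic's minimizer \(-Q^{-1}b\) and that \(\beta\) beats nearby points:

```python
import numpy as np

# Minimal numerical sketch (not the authors' algorithm): check the stated
# minimizer of the conic function
#   c(x) = (1/2) x^T Q x / (1 + p^T x)^2 + b^T x / (1 + p^T x) + a,
# namely beta = -Q^{-1} b / (1 + p^T Q^{-1} b), for a random positive
# definite Q. All data below are illustrative assumptions.

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)       # symmetric positive definite conjugacy matrix
b = rng.standard_normal(n)
p = 0.1 * rng.standard_normal(n)  # small gauge vector keeps the horizon away
a = 1.0

def c(x):
    s = 1.0 + p @ x               # horizon factor 1 + p^T x
    return 0.5 * (x @ Q @ x) / s**2 + (b @ x) / s + a

Qib = np.linalg.solve(Q, b)       # Q^{-1} b
beta = -Qib / (1.0 + p @ Qib)     # the stated minimizer

# The scaling y = x/(1 + p^T x) maps beta to the quadratic's minimizer
# -Q^{-1} b, and x = y/(1 - p^T y) inverts it.
y = beta / (1.0 + p @ beta)
assert np.allclose(y, -Qib)
assert np.allclose(beta, y / (1.0 - p @ y))

# beta should beat nearby perturbed points.
for _ in range(100):
    assert c(beta) <= c(beta + 1e-3 * rng.standard_normal(n))
```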
    unconstrained minimization
    conic function
    conic methods

    Identifiers