A nonsmooth version of the univariate optimization algorithm for locating the nearest extremum (locating extremum in nonsmooth univariate optimization) (Q944053)

From MaRDI portal
scientific article

    Statements

    12 September 2008
    The author presents an algorithm for solving nonsmooth univariate minimization problems. In many descent methods, finding the step size along the direction vector requires solving a minimization subproblem that is itself a one-dimensional search, so univariate search methods are indispensable and the efficiency of the overall algorithm depends in part on them. In this work, an algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by replacing the derivative with the generalized gradient. A convergence theorem is proved under a semismoothness assumption. This approach yields global, superlinear convergence of the algorithm, which is a generalized Newton-type method. Four test minimization problems are solved: two with smooth objective functions and two with nonsmooth ones.
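    One way to picture the generalized-gradient idea described above is a semismooth Newton iteration: the derivative in the classical Newton step is replaced by an element of the generalized gradient, and the second derivative by an element of its generalized Jacobian. The sketch below is only illustrative, under assumed inputs of our own choosing; it is not the paper's exact linear-lower-bounding algorithm, and the test objective is hypothetical.

    ```python
    def semismooth_newton(g, G, x0, tol=1e-10, max_iter=50):
        """Generalized Newton iteration for the critical-point equation
        g(x) = 0, where g(x) returns an element of the generalized
        gradient of the objective at x, and G(x) returns an element of
        the generalized Jacobian of g at x (illustrative sketch only)."""
        x = x0
        for _ in range(max_iter):
            gx = g(x)
            if abs(gx) < tol:
                break
            x = x - gx / G(x)  # Newton step with generalized derivatives
        return x

    # Hypothetical test objective: f(x) = x**2/2 + max(0, x - 1)**2,
    # a convex function whose derivative g is piecewise linear and
    # nonsmooth (semismooth) at x = 1; the minimizer is x = 0.
    g = lambda x: x + 2.0 * max(0.0, x - 1.0)   # element of the generalized gradient of f
    G = lambda x: 1.0 if x <= 1.0 else 3.0      # element of the generalized Jacobian of g
    root = semismooth_newton(g, G, x0=2.0)
    ```

    For this piecewise-linear g the iteration even terminates finitely once it enters the region where g is smooth, which loosely mirrors the fast local behavior that semismoothness guarantees for Newton-type methods.
    
    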
    univariate optimization
    unconstrained optimization
    linear bounding function
    semismooth function
    numerical examples
    algorithm
    superlinear convergence
    Newton-type method

    Identifiers