An algorithm for composite nonsmooth optimization problems (Q1057188)

Cites work:
    Generalized Gradients and Applications
    Q4182270
    Q3928936
    Optimality conditions for piecewise smooth functions
    Q3915937
    An Efficient Method to Solve the Minimax Problem Directly
    A model algorithm for composite nondifferentiable optimization problems
    Combined lp and quasi-Newton methods for minimax optimization
    Variable metric methods for minimizing a class of nondifferentiable functions
    A Projected Lagrangian Algorithm for Nonlinear Minimax Optimization
    Q3882253
    Q4199833
    Steplength algorithms for minimizing a class of nondifferentiable functions
    The computation of Lagrange-multiplier estimates for constrained minimization
    Nonlinear programming via an exact penalty function: Asymptotic analysis
    Nonlinear programming via an exact penalty function: Global analysis
    Numerically stable methods for quadratic programming
    Q4403648
    The watchdog technique for forcing convergence in algorithms for constrained optimization
    Non-linear minimax optimization as a sequence of least pth optimization with finite values of p
    Algorithms for nonlinear constraints that use lagrangian functions
    The Differential Correction Algorithm for Rational $\ell_\infty$-Approximation
    Q5630864
    Numerical Solution of Systems of Nonlinear Equations
    An Ideal Penalty Function for Constrained Optimization
    A note on the computation of an orthonormal basis for the null space of a matrix


Language: English
Label: An algorithm for composite nonsmooth optimization problems
Description: scientific article

    Statements

    An algorithm for composite nonsmooth optimization problems (English)
    Publication year: 1986
    Nonsmooth optimization problems are divided into two categories. The first consists of composite nonsmooth problems, in which the generalized gradient can be approximated from information available at the current point. The second consists of basic nonsmooth problems, in which the generalized gradient must be approximated using information calculated at previous iterates. Methods are considered for minimizing composite nonsmooth problems in which the nonsmooth function is built up from a finite number of smooth functions, in particular max functions. A descent method is presented which uses an active set strategy, a nonsmooth line search, and a quasi-Newton approximation to the reduced Hessian of a Lagrangian function. The theoretical properties of the method are discussed and favourable numerical experience on a wide range of test problems is reported.
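    The problem class described in the review can be made concrete with a small sketch. A composite nonsmooth objective has the form phi(x) = h(f(x)), where f is a smooth vector function and h is convex and polyhedral; the max function h(f) = max_i f_i(x) is the leading example. The snippet below is only an illustration of this problem class, not the descent method of the paper: it uses hypothetical smooth pieces f_i and solves the resulting minimax problem through its standard smooth reformulation (minimize an auxiliary bound t subject to f_i(x) <= t) with an off-the-shelf SQP solver from SciPy.

```python
# Illustrative sketch only: a minimax problem min_x max_i f_i(x), the leading example
# of a composite nonsmooth objective h(f(x)) with h = max.  This is NOT the active-set
# descent method of the paper; it only shows the structure of the problem class.
import numpy as np
from scipy.optimize import minimize

def smooth_pieces(x):
    """Hypothetical smooth functions f_i whose pointwise maximum is to be minimized."""
    return np.array([
        x[0]**2 + x[1]**4,
        (2.0 - x[0])**2 + (2.0 - x[1])**2,
        2.0 * np.exp(x[1] - x[0]),
    ])

def objective(z):
    # z = (x_1, x_2, t); minimize the auxiliary bound t
    return z[2]

def max_constraints(z):
    # t - f_i(x) >= 0 for every smooth piece, i.e. t bounds the max
    return z[2] - smooth_pieces(z[:2])

z0 = np.array([1.0, -0.1, 5.0])          # starting point (x, t)
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": max_constraints}])
x_opt = res.x[:2]
print("x* =", x_opt, " max_i f_i(x*) =", smooth_pieces(x_opt).max())
```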
    Keywords:
    reduced curvature approximations
    max functions
    nonsmooth optimization
    generalized gradient
    descent method
    active set strategy
    quasi-Newton approximation
    numerical experience
