A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints (Q2182907)

scientific article

    Statements

    A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints (English)
    26 May 2020
    The paper investigates the weakest possible conditions under which optimization problems can be solved by means of a recurrent neural network. Convex, generalized convex, and nonlinear nonconvex problems are considered. The Introduction gives a good overview of the existing results in the literature, followed by four remarks showing that the network used in this paper is indeed new. Section 2 recalls the definitions needed later.

    Section 3 states the optimization problem P (finite-dimensional space, convex inequality constraints, affine equality constraints with a full-row-rank coefficient matrix, an objective function that is not necessarily convex or smooth), subject to several assumptions: the Slater condition, boundedness of the feasible region, and regularity and Lipschitz continuity of the objective function. Furthermore, the recurrent neural network for solving P is presented; it takes the form of a nonautonomous differential inclusion. Two figures support the mathematical presentation.

    Section 4 gives the theoretical analysis, starting with the definition of a critical point of P and of the state solution of the network, together with its convergence behavior: the state of the network enters the feasible region in finite time and remains there afterwards; the distance to the set of critical points is estimated; relations to the Kuhn-Tucker points of P are established; and, if the objective function is pseudoconvex, the state of the network converges globally to an optimal solution of P.

    Section 5 starts with the definition of a slow solution of a (general) differential inclusion, and it is shown that a solution of the network (with a special initial point) is precisely its slow solution and is unique if the objective function is convex. Five test examples with remarks and figures supplement the paper.
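    For orientation, the standard notion of a slow solution from the theory of differential inclusions (in the sense of Aubin and Cellina) is recalled below; the paper's precise formulation may differ in detail. For the differential inclusion $\dot{x}(t) \in F(x(t))$, the slow solution is the one driven at almost every time by the least-norm element of the right-hand side:

    \[
      \dot{x}(t) = m\bigl(F(x(t))\bigr) \quad \text{for a.e. } t \ge 0,
      \qquad
      m(S) := \operatorname*{arg\,min}_{v \in S} \lVert v \rVert .
    \]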
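    The paper's exact network is not reproduced here; as a rough illustration of how a penalty-based neurodynamic flow of this kind can be simulated, the following is a minimal sketch that integrates an exact-penalty subgradient flow with the forward Euler method. The toy problem, the penalty weight sigma, and the step size are all illustrative assumptions, not data from the paper; the sketch merely mirrors the qualitative behavior described above (the state reaches the feasible region in finite time and then approaches the minimizer).

    import numpy as np

    # Toy instance (illustrative, not from the paper):
    #   minimize   f(x) = (x1 - 2)^2 + (x2 - 2)^2
    #   subject to x1 + x2 = 1  (affine equality),  -x1 <= 0  (convex inequality)
    grad_f = lambda x: 2.0 * (x - np.array([2.0, 2.0]))
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    g = lambda x: -x[0]                      # convex inequality g(x) <= 0
    grad_g = np.array([-1.0, 0.0])           # gradient of g

    sigma = 10.0  # assumed penalty weight, large enough to exceed the multipliers
    dt = 1e-3     # Euler step size
    x = np.array([3.0, 1.0])                 # infeasible initial state

    for _ in range(20000):
        # subgradient of the exact penalty |Ax - b|_1 + max(0, g(x))
        pen = (A.T @ np.sign(A @ x - b)).ravel()
        if g(x) > 0:
            pen = pen + grad_g
        # forward Euler step of the flow dx/dt = -(grad f(x) + sigma * pen)
        x = x - dt * (grad_f(x) + sigma * pen)

    print(x)  # approximately [0.5, 0.5], the constrained minimizer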
    Keywords: nonlinear optimization problems; recurrent neural network; Lyapunov function; global convergence; critical points; slow solution