A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints (Q2182907)

From MaRDI portal
 
Property / cites work
 
Property / cites work: Q3324260 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q3134873 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Neural network for nonsmooth pseudoconvex optimization with general convex constraints / rank
 
Normal rank
Property / cites work
 
Property / cites work: Minimizing the Condition Number of a Gram Matrix / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q3768810 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q4326408 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Optimization and nonsmooth analysis / rank
 
Normal rank
Property / cites work
 
Property / cites work: Generalized Neural Network for Nonsmooth Nonlinear Programming Problems / rank
 
Normal rank
Property / cites work
 
Property / cites work: Concrete Structure Design using Mixed-Integer Nonlinear Programming with Complementarity Constraints / rank
 
Normal rank
Property / cites work
 
Property / cites work: Direct trajectory optimization using nonlinear programming and collocation / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q3141900 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Bayesian Compressive Sensing / rank
 
Normal rank
Property / cites work
 
Property / cites work: Using sliding modes in static optimization and nonlinear programming / rank
 
Normal rank
Property / cites work
 
Property / cites work: A one-layer recurrent neural network for constrained nonconvex optimization / rank
 
Normal rank
Property / cites work
 
Property / cites work: A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization / rank
 
Normal rank
Property / cites work
 
Property / cites work: Optimizing Condition Numbers / rank
 
Normal rank
Property / cites work
 
Property / cites work: Generalized convexity of functions and generalized monotonicity of set-valued maps / rank
 
Normal rank
Property / cites work
 
Property / cites work: Neural network for constrained nonsmooth optimization using Tikhonov regularization / rank
 
Normal rank
Property / cites work
 
Property / cites work: A neurodynamic approach to convex optimization problems with general constraint / rank
 
Normal rank
Property / cites work
 
Property / cites work: Dynamical Analysis of Full-Range Cellular Neural Networks by Exploiting Differential Variational Inequalities / rank
 
Normal rank
Property / cites work
 
Property / cites work: Sliding modes in control and optimization. Transl. from the Russian / rank
 
Normal rank
Property / cites work
 
Property / cites work: A collective neurodynamic optimization approach to bound-constrained nonconvex optimization / rank
 
Normal rank
Property / cites work
 
Property / cites work: Lagrange programming neural networks / rank
 
Normal rank


Language: English
Label: A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints
Description: scientific article

    Statements

    A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints (English)
    26 May 2020
    The paper establishes conditions, as weak as possible, under which optimization problems can be solved by a recurrent neural network. Convex, generalized convex, and nonconvex nonlinear problems are considered. The introduction gives a good overview of existing results in the literature and is followed by four remarks showing that the network used in this paper is indeed new. Section 2 recalls the required definitions. Section 3 states the optimization problem P (finite-dimensional space, convex inequality constraints, linear equality constraints with a full-row-rank matrix, and an objective function that is not necessarily convex or smooth), subject to several conditions: a Slater condition, boundedness of the feasible region, and regularity and Lipschitz continuity of the objective function. The recurrent neural network proposed for solving P, a nonautonomous differential inclusion, is then presented; two figures support the mathematical exposition. Section 4 develops the theoretical analysis, beginning with the definitions of a critical point of P and of a state solution of the network, and establishes the network's convergence behavior: the state enters the feasible region in finite time and remains there; estimates on the distance to the set of critical points are given; relations to the Kuhn-Tucker points of P are derived; and, if the objective function is pseudoconvex, the state of the network converges globally to an optimal solution of P. Section 5 defines the slow solution of a (common) differential inclusion and shows that a solution of the network with a suitable initial point is exactly its slow solution, and that this solution is unique if the objective function is convex. Five test examples with remarks and figures supplement the paper.
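The neurodynamic idea the review describes can be sketched, under simplifying assumptions, as a projected (sub)gradient flow integrated by forward Euler: the equality constraint is kept invariant by a null-space projection and the convex inequality is handled by an exact penalty term. The toy problem, the penalty weight `c`, the step size `dt`, and the helper names `grad_f` and `subgrad_penalty` below are illustrative choices, not the paper's actual differential inclusion:

```python
import numpy as np

# Toy instance of the problem class discussed in the review:
#   minimize  f(x) = ||x||^2                 (smooth, convex)
#   s.t.      A x = b                        (affine equality, full row rank)
#             g(x) = x[0] - 2 <= 0           (convex inequality)
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

def grad_f(x):
    return 2.0 * x                           # gradient of ||x||^2

def subgrad_penalty(x):
    # a subgradient of max(0, g(x)), the exact penalty for the inequality
    s = np.zeros_like(x)
    if x[0] - 2.0 > 0.0:
        s[0] = 1.0
    return s

# Projecting onto the null space of A keeps A x = b invariant along the flow
P = np.eye(2) - A.T @ np.linalg.inv(A @ A.T) @ A

x = np.array([1.0, 2.0])                     # feasible initial state (A x = b)
dt, c = 0.01, 10.0                           # Euler step size, penalty weight
for _ in range(5000):
    x = x + dt * (-P @ (grad_f(x) + c * subgrad_penalty(x)))

print(np.round(x, 3))                        # → [1.5 1.5], the optimum
```

The flow settles at the constrained minimizer (1.5, 1.5) of the toy problem; the paper's network additionally handles nonsmooth and nonconvex objectives, which this smooth sketch does not attempt.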
    nonlinear optimization problems
    recurrent neural network
    Lyapunov function
    global convergence
    critical points
    slow solution

    Identifiers