Discrete-time Zhang neural networks for time-varying nonlinear optimization (Q2296503)

From MaRDI portal

    Statements

    Discrete-time Zhang neural networks for time-varying nonlinear optimization (English)
    18 February 2020
    Summary: As a special kind of recurrent neural network, the Zhang neural network (ZNN) has been successfully applied to solving various time-variant problems. In this paper, we present three Zhang et al. discretization (ZeaD) formulas, namely a special two-step ZeaD formula, a general two-step ZeaD formula, and a general five-step ZeaD formula, and prove that the special and general two-step ZeaD formulas are convergent, whereas the general five-step ZeaD formula is not zero-stable and is therefore divergent. Then, to solve time-varying nonlinear optimization (TVNO) in real time, based on the Taylor series expansion and the two convergent two-step ZeaD formulas above, we discretize the continuous-time ZNN (CTZNN) model of TVNO and thus obtain a special two-step discrete-time ZNN (DTZNN) model and a general two-step DTZNN model. Theoretical analyses indicate that the sequence generated by the first DTZNN model is divergent, while the sequence generated by the second DTZNN model is convergent. Furthermore, a tight upper bound on the step size of the second DTZNN model and the corresponding optimal step size are discussed. Finally, numerical results and comparisons are provided and analyzed to substantiate the efficacy of the proposed DTZNN models.
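
    The abstract does not reproduce the ZeaD coefficients or the resulting DTZNN update rules, so the following Python sketch only illustrates the general construction it describes, under explicitly assumed ingredients: a toy time-varying objective f(x, t) = 0.5*||x - c(t)||^2, the standard ZNN design formula applied to the gradient (drive e(t) = grad_x f(x(t), t) to zero via e_dot = -lambda*e, which gives the CTZNN model x_dot = -H^{-1}(lambda*grad_x f + d(grad_x f)/dt)), and a generic first-order-consistent two-step difference rule standing in for the paper's specific ZeaD formulas. All parameter values, coefficient choices, and function names below are illustrative, not taken from the paper.

        import numpy as np

        # Hedged illustration only: a toy time-varying nonlinear optimization
        #     minimize over x:  f(x, t) = 0.5 * ||x - c(t)||^2,  c(t) = [sin t, cos t],
        # solved by a discrete-time ZNN-style iteration.  The ZeaD coefficients used
        # in the paper are not given in the abstract; the two-step rule below is a
        # generic first-order-consistent stand-in, not the authors' formula.

        tau = 0.01    # sampling gap (assumed value)
        lam = 10.0    # ZNN design parameter lambda (assumed value)
        a = 0.25      # free parameter of the illustrative two-step rule (assumed)

        def grad(x, t):
            # gradient of the toy objective with respect to x
            return x - np.array([np.sin(t), np.cos(t)])

        def grad_t(x, t):
            # explicit time derivative of the gradient, i.e. -c'(t)
            return np.array([-np.cos(t), np.sin(t)])

        def hess(x, t):
            # Hessian of the toy objective (identity matrix here)
            return np.eye(2)

        # Continuous-time ZNN model for time-varying optimization:
        #   e(t) = grad_x f(x(t), t),  e_dot = -lam * e
        #   =>  x_dot = -H^{-1} (lam * grad + d(grad)/dt)
        x_prev = np.zeros(2)
        x = np.zeros(2)
        for k in range(2000):
            t = k * tau
            x_dot = -np.linalg.solve(hess(x, t), lam * grad(x, t) + grad_t(x, t))
            # Illustrative two-step rule:
            #   x_dot_k ~ ((1+a)*x_{k+1} - (1+2a)*x_k + a*x_{k-1}) / tau
            # Its coefficients sum to zero and weight the first difference by one
            # (first-order consistency), and its parasitic root a/(1+a) = 0.2 lies
            # inside the unit circle, so the rule is zero-stable; a = 0 recovers
            # forward Euler.
            x_next = (tau * x_dot + (1 + 2 * a) * x - a * x_prev) / (1 + a)
            x_prev, x = x, x_next

        t_end = 2000 * tau
        print("computed x:", x)
        print("moving target c(t_end):", np.array([np.sin(t_end), np.cos(t_end)]))

    Under these assumptions the iterate tracks the moving minimizer c(t) with a small lag; the paper's own DTZNN models, ZeaD coefficients, convergence proofs, and step-size bounds are those derived in the article itself.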