Stability analysis of Hopfield neural networks with uncertainty (Q5943362)
From MaRDI portal
scientific article; zbMATH DE number 1643032
Language | Label | Description | Also known as
---|---|---|---
English | Stability analysis of Hopfield neural networks with uncertainty | scientific article; zbMATH DE number 1643032 |
Statements
Stability analysis of Hopfield neural networks with uncertainty (English)
2 December 2002
Here, stability properties of Hopfield neural networks with uncertainty of the form \[ \dot u_i=\sum^n_{j=1}(T_{ij}+\Delta T_{ij}(m))g_j(u_j)-a_iu_i+k_i,\;i=1,\dots,n, \text{ or }\dot u=(T+\Delta T(m))g(u)-Au+k,\tag{1} \] are studied, with \(u\in\mathbb{R}^n\), \(t\in \mathbb{R}\), \(m\in\mathbb{R}^\ell\) a constant parameter vector, and \(k\in\mathbb{R}^n\) a fixed external input. The matrices \(T\in\mathbb{R}^{n\times n}\) and \(A\in\mathbb{R}^{n\times n}\) represent the nominal part of the neural network, with \(A=\text{diag}[a_1,\dots,a_n]\), \(a_i>0\), \(i=1,\dots,n\), and the matrix \(\Delta T(m)\in \mathbb{R}^{n\times n}\) denotes the uncertainty in the interconnection weight matrix \(T\). It is supposed that the uncertainty matrix \(\Delta T(m)\) can be represented as \(\Delta T(m)=DF(m)E\), where \(D\in \mathbb{R}^{n\times n}\) and \(E\in \mathbb{R}^{n\times n}\) are known constant matrices and \(F(m)\in\mathbb{R}^{n\times n}\) satisfies the inequality \(F(m)^TF(m)\leq I\), \(\forall m\in\mathcal M\), where \(\mathcal M\) is a bounded and simply connected region in \(\mathbb{R}^\ell\). The uncertain neural network (1) is globally parametrically asymptotically stable (GPA-stable) if for any \(m\in \mathcal M\), (i) there exists a unique equilibrium point \(u^*(m)\) of (1); (ii) the equilibrium point \(u^*(m)\) is globally asymptotically stable. The authors state sufficient conditions which ensure the existence of a GPA-stable equilibrium point and are based on the individual entries of the matrices \(T\), \(D\), \(F(m)\), \(E\) and \(A\).
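As a small numerical illustration of model (1) and the GPA-stability property, the dynamics can be integrated forward in time; for a stable network, trajectories from different initial states settle at the same equilibrium \(u^*(m)\). All matrices below (\(T\), \(A\), \(D\), \(E\), the choice \(F(m)\) as a rotation, and \(g=\tanh\)) are hypothetical examples, not taken from the paper:

```python
import numpy as np

# Hypothetical n = 2 example: nominal weights T, decay A, input k,
# and structured uncertainty Delta T(m) = D F(m) E with F(m)^T F(m) <= I.
n = 2
T = np.array([[-2.0, 0.3],
              [0.2, -2.5]])
A = np.diag([1.0, 1.5])           # a_i > 0
k = np.array([0.1, -0.2])
D = 0.1 * np.eye(n)
E = 0.1 * np.eye(n)

def F(m):
    # Any F with F^T F <= I is admissible; here a rotation by angle m.
    c, s = np.cos(m), np.sin(m)
    return np.array([[c, -s], [s, c]])

g = np.tanh                       # sigmoidal activation g_j

def rhs(u, m):
    """Right-hand side of (1): (T + D F(m) E) g(u) - A u + k."""
    return (T + D @ F(m) @ E) @ g(u) - A @ u + k

def simulate(u0, m, dt=0.01, steps=20000):
    """Forward-Euler integration of (1) toward the equilibrium u*(m)."""
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = u + dt * rhs(u, m)
    return u

u_star = simulate([1.0, -1.0], m=0.5)
print(u_star, np.linalg.norm(rhs(u_star, 0.5)))   # residual near zero
```

For this strongly diagonally dominant \(T\) and small \(D\), \(E\), trajectories from any initial state converge to the same \(u^*(m)\), consistent with GPA-stability.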
One of the main results shows that, given a symmetric positive-definite matrix \(Q\in\mathbb{R}^{n\times n}\), if there exist constants \(\varepsilon>0\), \(\sigma>0\) such that the equation \[ -PA-AP+P[\varepsilon TT^T+\sigma DD^T]P+bI+Q=0 \] has a symmetric positive-definite solution \(P\), where \(b=\lambda_{\max}[\frac 1\varepsilon I+\frac 1\sigma E^TE](\max_{1\leq i\leq n}\{g_i'(0)\})^2\) and \(\lambda_{\max}[B]\) denotes the largest eigenvalue of \(B\), then there exists a GPA-stable equilibrium point \(u^*(m)\). Another result states that, if \(T\) is diagonally dominant with negative diagonal elements and \[ \|E\|_1\sum^n_{k=1}|D_{ik}|<|T_{ii}|-\sum_{j\neq i}|T_{ij}|,\;i=1,\dots,n,\tag{2} \] where \[ \|E\|_1=\sum^n_{i=1}\sum^n_{j=1}|E_{ij}|, \] then for all \(m\in \mathcal M\) there exists a GPA-stable equilibrium point \(u^*(m)\). A similar result holds if condition (2) is replaced by the column-wise analogue \[ \|D\|_1\sum^n_{i=1}|E_{ij}|<|T_{jj}|-\sum_{i\neq j} |T_{ij}|,\;j=1,\dots,n. \]
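The entry-wise condition (2) is straightforward to test numerically. A minimal sketch, with hypothetical matrices \(T\), \(D\), \(E\) chosen only for illustration:

```python
import numpy as np

def gpa_condition_2(T, D, E):
    """Check the sufficient condition (2):
    ||E||_1 * sum_k |D_ik| < |T_ii| - sum_{j != i} |T_ij| for every row i,
    where ||E||_1 = sum_{i,j} |E_ij| and T has negative diagonal entries."""
    if np.any(np.diag(T) >= 0):
        return False
    E1 = np.abs(E).sum()                  # ||E||_1
    for i in range(T.shape[0]):
        off_diag = np.abs(T[i]).sum() - np.abs(T[i, i])
        if E1 * np.abs(D[i]).sum() >= np.abs(T[i, i]) - off_diag:
            return False
    return True

# Hypothetical example: strongly diagonally dominant T, small D and E.
T = np.array([[-3.0, 0.5],
              [0.4, -4.0]])
D = 0.1 * np.eye(2)
E = 0.1 * np.eye(2)
print(gpa_condition_2(T, D, E))           # True: condition (2) holds
print(gpa_condition_2(T, 20 * D, 20 * E)) # False: uncertainty too large
```

Scaling up \(D\) and \(E\) enlarges the admissible uncertainty \(\Delta T(m)=DF(m)E\) until the margin of diagonal dominance in \(T\) is exhausted and the sufficient condition fails.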
Hopfield neural network
stability
uncertainty
uncertain neural network