State estimation of discrete-time Markov jump neural networks with general transition probabilities and output quantization
DOI: 10.1080/10236198.2017.1368501
zbMATH Open: 1383.37075
OpenAlex: W2751151068
Wikidata: Q114257364
Scholia: Q114257364
MaRDI QID: Q3133501
FDO: Q3133501
Ahmed Alsaedi, Jinde Cao, R. Sasirekha, Ying Wan, R. Rakkiyappan
Publication date: 2 February 2018
Published in: Journal of Difference Equations and Applications
Full work available at URL: https://doi.org/10.1080/10236198.2017.1368501
MSC classification:
- 60J20 Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.)
- 93E11 Filtering in stochastic control theory
- 92B20 Neural networks for/in biological studies, artificial life and related topics
- 37N35 Dynamical systems in control
- 93B36 \(H^\infty\)-control
Cited in (4):
- Moments and distributions of the last exit times for a class of Markov processes
- \(H_\infty\) state estimator design for discrete-time switched neural networks with multiple missing measurements and sojourn probabilities
- LMI-based results on exponential stability of BAM-type neural networks with leakage and both time-varying delays: a non-fragile state estimation approach
- Non-fragile mixed \(H_\infty\) and passivity control for neural networks with successive time-varying delay components