Optimal model-free output synchronization of heterogeneous systems using off-policy reinforcement learning



DOI: 10.1016/j.automatica.2016.05.017
zbMath: 1343.93006
OpenAlex: W2430619152
MaRDI QID: Q313256

Subramanya P. Nageshrao, Gabriel A. Delgado Lopes, Robert Babuška, Hamidreza Modares, Frank L. Lewis

Publication date: 9 September 2016

Published in: Automatica

Full work available at URL: https://doi.org/10.1016/j.automatica.2016.05.017



Related Items

Heterogeneous formation control of multiple rotorcrafts with unknown dynamics by reinforcement learning
Adaptive fuzzy sliding-mode consensus control of nonlinear under-actuated agents in a near-optimal reinforcement learning framework
General value iteration based single network approach for constrained optimal controller design of partially-unknown continuous-time nonlinear systems
Output synchronization of heterogeneous discrete-time systems: a model-free optimal approach
Finite-time adaptive output synchronization of uncertain nonlinear heterogeneous multi-agent systems
Adaptive distributed observer for an uncertain leader over acyclic switching digraphs
Optimal robust formation control for heterogeneous multi-agent systems based on reinforcement learning
Reinforcement learning and cooperative \(H_\infty\) output regulation of linear continuous-time multi-agent systems
Distributed output data-driven optimal robust synchronization of heterogeneous multi-agent systems
Specified convergence rate guaranteed output tracking of discrete-time systems via reinforcement learning
Leader-follower time-varying output formation control of heterogeneous systems under cyber attack with active leader
Optimal output synchronization of heterogeneous multi-agent systems using measured input-output data
ADP-based robust consensus for multi-agent systems with unknown dynamics and random uncertain channels
Off-policy learning for adaptive optimal output synchronization of heterogeneous multi-agent systems
\(\mathcal{H}_2\) suboptimal output synchronization of heterogeneous multi-agent systems
Cooperative adaptive optimal output regulation of nonlinear discrete-time multi-agent systems
Observer-based adaptive optimal output containment control problem of linear heterogeneous multiagent systems with relative output measurements
Output-feedback \(H_\infty\) quadratic tracking control of linear systems using reinforcement learning
Off-policy Q-learning: solving Nash equilibrium of multi-player games with network-induced delay and unmeasured state
Output synchronization control for a class of complex dynamical networks with non-identical dynamics
Reinforcement learning for distributed control and multi-player games
Distributed consensus control for nonlinear multi-agent systems
Adaptive optimal output tracking of continuous-time systems via output-feedback-based reinforcement learning
Adaptive distributed observer for an uncertain leader with an unknown output over directed acyclic graphs
Cooperative output regulation of linear multi-agent systems subject to an uncertain leader system



Cites Work