TD3-BC-PPO: twin delayed DDPG-based and behavior cloning-enhanced proximal policy optimization for dynamic optimization affine formation
From MaRDI portal
Publication: 6579244
DOI: 10.1016/J.JFRANKLIN.2024.107018
zbMATH Open: 1543.93194
MaRDI QID: Q6579244
FDO: Q6579244
Authors: Xinyu Xu, Y. Y. Chen, Tianrun Liu
Publication date: 25 July 2024
Published in: Journal of the Franklin Institute
Recommendations
- Optimal robust formation control for heterogeneous multi-agent systems based on reinforcement learning
- Heterogeneous optimal formation control of nonlinear multi-agent systems with unknown dynamics by safe reinforcement learning
- Performance-guaranteed containment control for pure-feedback multi-agent systems via reinforcement learning algorithm
- Optimal antisynchronization control for unknown multiagent systems with deep deterministic policy gradient approach
MSC classification
- Artificial neural networks and deep learning (68T07)
- Adaptive control/observation systems (93C40)
- Multi-agent systems (93A16)
Cites Work
- Affine Formation Maneuver Control of Multiagent Systems
- Necessary and Sufficient Graphical Conditions for Affine Formation Control
- Optimal dynamic formation control of multi-agent systems in constrained environments
- Affine formation maneuver control of high-order multi-agent systems over directed networks
- Adaptive Formation Tracking Control for First-Order Agents With a Time-Varying Flow Parameter
- Event-triggered affine formation maneuver control for second-order multi-agent systems with sampled data
- Collision avoidance control for limited perception unmanned surface vehicle swarm based on proximal policy optimization
- Improved DRL-based energy-efficient UAV control for maximum lifecycle