Safe exploration in model-based reinforcement learning using control barrier functions
DOI: 10.1016/j.automatica.2022.110684 · zbMATH Open: 1505.93123 · arXiv: 2104.08171 · OpenAlex: W3154507809 · MaRDI QID: Q2103658
Authors: M. Cohen, Calin Belta
Publication date: 9 December 2022
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2104.08171
Recommendations
- Safe control of nonlinear systems in LPV framework using model-based reinforcement learning
- Temporal logic guided safe model-based reinforcement learning: a hybrid systems approach
- Safe reinforcement learning for continuous spaces through Lyapunov-constrained behavior
- Safe Exploration of State and Action Spaces in Reinforcement Learning
- A comprehensive survey on safe reinforcement learning
Classification (MSC)
- Learning and adaptive systems in artificial intelligence (68T05)
- Dynamic programming in optimal control and differential games (49L20)
- Nonlinear systems in control theory (93C10)
- Adaptive control/observation systems (93C40)
Cites Work
- Barrier Lyapunov functions for the control of output-constrained nonlinear systems
- Adaptive nonlinear control without overparametrization
- Switching in systems and control
- Set invariance in control
- Nonlinear systems
- Adaptive Control Tutorial
- Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers
- Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem
- Efficient model-based reinforcement learning for approximate online optimal control
- Model-based reinforcement learning for approximate optimal regulation
- Barrier function based model predictive control
- Distributed Coordination Control for Multi-Robot Networks Using Lyapunov-Like Barrier Functions
- Control Barrier Function Based Quadratic Programs for Safety Critical Systems
- A General Safety Framework for Learning-Based Control in Uncertain Robotic Systems
- Reinforcement learning for optimal feedback control. A Lyapunov-based approach
- Robust control barrier functions for constrained stabilization of nonlinear systems
- Data-based reinforcement learning approximate optimal control for an uncertain nonlinear system with control effectiveness faults
- Integral concurrent learning: adaptive control with parameter convergence using finite excitation
- Data-Driven Economic NMPC Using Reinforcement Learning
- Approximate optimal influence over an agent through an uncertain interaction dynamic
- Safe reinforcement learning for dynamical games
- Safe reinforcement learning: A control barrier function optimization approach
Cited In (22)
- Adaptive critic learning for approximate optimal event-triggered tracking control of nonlinear systems with prescribed performances
- Safe reinforcement learning for continuous spaces through Lyapunov-constrained behavior
- An iterative scheme of safe reinforcement learning for nonlinear systems via barrier certificate generation
- Safe Exploration of State and Action Spaces in Reinforcement Learning
- Safety reinforcement learning control via transfer learning
- Safe adaptive output-feedback optimal control of a class of linear systems
- A predictive safety filter for learning-based control of constrained nonlinear dynamical systems
- Reinforcement learning control of constrained dynamic systems with uniformly ultimate boundedness stability guarantee
- A comprehensive survey on safe reinforcement learning
- Temporal logic guided safe model-based reinforcement learning: a hybrid systems approach
- Nonconvex policy search using variational inequalities
- DOI: 10.1162/jmlr.2003.3.4-5.803
- Safe robust multi-agent reinforcement learning with neural control barrier functions and safety attention mechanism
- Assured learning-enabled autonomy: a metacognitive reinforcement learning framework
- Learning safe neural network controllers with barrier certificates
- Safety-aware apprenticeship learning
- Safe reinforcement learning: A control barrier function optimization approach
- Safe control of nonlinear systems in LPV framework using model-based reinforcement learning
- Explicit explore, exploit, or escape (E^4): near-optimal safety-constrained reinforcement learning in polynomial time
- Off-policy model-based end-to-end safe reinforcement learning
- Verifiably Safe Off-Model Reinforcement Learning
- Probabilistic counterexample guidance for safer reinforcement learning