Q-learning-based model predictive variable impedance control for physical human-robot collaboration
DOI: 10.1016/j.artint.2022.103771 | OpenAlex: W4291006859 | MaRDI QID: Q2093371 | FDO: Q2093371
Authors: Loris Roveda, Andrea Testa, Asad Ali Shahid, Francesco Braghin, Dario Piga
Publication date: 8 November 2022
Published in: Artificial Intelligence
Full work available at URL: https://doi.org/10.1016/j.artint.2022.103771
Keywords: machine learning; stability; neural networks; Q-learning; industry 4.0; model-based reinforcement learning control; physical human-robot collaboration; variable impedance control
Cites Work
- \({\mathcal Q}\)-learning
- A `universal' construction of Artstein's theorem on nonlinear stabilization
- Stabilization with relaxed controls
- Stabilization of nonlinear systems with state and control constraints using Lyapunov-based predictive control
- Predictive control of switched nonlinear systems with scheduled mode transitions
- Lyapunov stability theory of nonsmooth systems
- Unconstrained receding-horizon control of nonlinear systems
- Nonlinear Systems Analysis
- Modelling and control of robot manipulators
- Inverse Optimality in Robust Stabilization
- On the stability of receding horizon control with a general terminal cost
- A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems
- Lyapunov-Based Model Predictive Control of Nonlinear Systems Subject to Data Losses
- A receding horizon generalization of pointwise min-norm controllers
- Nonlinear model predictive control. Theory and algorithms