Model-free LQR design by Q-function learning
Publication:2071928
DOI: 10.1016/j.automatica.2021.110060 · zbMath: 1485.93031 · OpenAlex: W4200561738 · MaRDI QID: Q2071928
Maryam Babazadeh, Milad Farjadnasab
Publication date: 31 January 2022
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/j.automatica.2021.110060
Keywords: convex optimization; distributed control; Q-learning; semi-definite programming (SDP); linear quadratic regulation (LQR)
MSC classification: Semidefinite programming (90C22); Convex programming (90C25); Feedback control (93B52); Linear systems in control theory (93C05); Linear-quadratic optimal control problems (49N10); Large-scale systems (93A15)
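The keywords above combine Q-learning with LQR design. As background for readers unfamiliar with the idea, the following is a minimal, self-contained sketch of model-free LQR via Q-function policy iteration (in the spirit of Bradtke-style Q-learning for LQR, not a reproduction of this paper's SDP-based method). The system matrices, costs, and sample sizes below are illustrative assumptions; the learner only sees one-step transitions and stage costs, never `A` or `B` directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stable discrete-time system (assumed for the demo;
# used only to generate data -- the learner never reads A or B)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)          # state cost x'Qx
Rc = 0.1 * np.eye(1)    # input cost u'Ru
n, m = 2, 1
p = n + m

def feats(x, u):
    # Quadratic features of z = [x; u], parameterizing Q(x,u) = z' H z
    z = np.concatenate([x, u])
    return np.outer(z, z).ravel()

def q_policy_iteration(num_iters=10, N=400):
    K = np.zeros((m, n))                 # initial (stabilizing) policy u = K x
    for _ in range(num_iters):
        Phi, c = [], []
        for _ in range(N):
            x = rng.standard_normal(n)   # sampled state
            u = rng.standard_normal(m)   # exploratory input (excitation)
            xn = A @ x + B @ u           # observed one-step transition
            un = K @ xn                  # next action under current policy
            # Bellman equation: Q_K(x,u) - Q_K(x', Kx') = stage cost
            Phi.append(feats(x, u) - feats(xn, un))
            c.append(x @ Qc @ x + u @ Rc @ u)
        theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
        H = theta.reshape(p, p)
        H = (H + H.T) / 2                # symmetrize the learned Q matrix
        Huu, Hux = H[n:, n:], H[n:, :n]
        K = -np.linalg.solve(Huu, Hux)   # greedy policy improvement
    return K

K_learned = q_policy_iteration()
print(K_learned)
```

Because the stage cost and transitions are noise-free here, each least-squares step performs exact policy evaluation, and the improvement step reproduces Riccati-based policy iteration; the learned gain matches the optimal LQR gain to high accuracy after a few iterations.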
Related Items (1)
Uses Software
Cites Work
- Robust controllability assessment and optimal actuator placement in dynamic networks
- A note on persistency of excitation
- Linear Matrix Inequalities in System and Control Theory
- Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls
- Sparsity Promotion in State Feedback Controller Design
- Adaptive Leader–Follower Synchronization Over Heterogeneous and Uncertain Networks of Linear Systems Without Distributed Observer
- Formulas for Data-Driven Control: Stabilization, Optimality, and Robustness
- Primal-Dual Q-Learning Framework for LQR Design
- Distributed Adaptive Control of Synchronization in Complex Networks
- Design of Optimal Sparse Feedback Gains via the Alternating Direction Method of Multipliers
- On the Construction and Comparison of Difference Schemes