Global convergence of natural policy gradient with Hessian-aided momentum variance reduction
From MaRDI portal
Publication: Q6629222
Recommendations
- On linear and super-linear convergence of natural policy gradient algorithm
- Fast global convergence of natural policy gradient methods with entropy regularization
- Geometry and convergence of natural policy gradient methods
- Approximate Newton Policy Gradient Algorithms
- Global convergence of policy gradient methods to (almost) locally optimal policies
Cites work
- Scientific article; zbMATH DE number 7370615 (no title available)
- Fast global convergence of natural policy gradient methods with entropy regularization
- Global convergence of policy gradient methods to (almost) locally optimal policies
- Homotopic policy mirror descent: policy convergence, algorithmic regularization, and improved sample complexity
- On Actor-Critic Algorithms
- Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
- Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes
- Reinforcement learning. An introduction
- Smoothing policies and safe policy gradients
This page was built for publication: Global convergence of natural policy gradient with Hessian-aided momentum variance reduction