Global convergence of natural policy gradient with Hessian-aided momentum variance reduction
DOI: 10.1007/s10915-024-02688-x
MaRDI QID: Q6629222
Authors: Jie Feng, Ke Wei, Jinchi Chen
Publication date: 29 October 2024
Published in: Journal of Scientific Computing
Recommendations
- On linear and super-linear convergence of natural policy gradient algorithm
- Fast global convergence of natural policy gradient methods with entropy regularization
- Geometry and convergence of natural policy gradient methods
- Approximate Newton Policy Gradient Algorithms
- Global convergence of policy gradient methods to (almost) locally optimal policies
Mathematics Subject Classification
- Nonconvex programming, global optimization (90C26)
- Methods of quasi-Newton type (90C53)
- Computational methods for problems pertaining to operations research and mathematical programming (90-08)
- Stochastic systems in control theory (general) (93E03)
- Stochastic learning and adaptive control (93E35)
Cites Work
- On Actor-Critic Algorithms
- Reinforcement learning. An introduction
- Title not available
- Global convergence of policy gradient methods to (almost) locally optimal policies
- Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes
- Fast global convergence of natural policy gradient methods with entropy regularization
- Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
- Smoothing policies and safe policy gradients
- Homotopic policy mirror descent: policy convergence, algorithmic regularization, and improved sample complexity
Cited In (1)