Pretty darn good control: when are approximate solutions better than approximate models
DOI: 10.1007/s11538-023-01198-5 · zbMATH Open: 1522.91174 · arXiv: 2308.13654 · MaRDI QID: Q6077321
Authors: Felipe Montealegre-Mora, Marcus Lapeyrolerie, Melissa Chapman, Abigail G. Keller, Carl Boettiger
Publication date: 25 September 2023
Published in: Bulletin of Mathematical Biology
Full work available at URL: https://arxiv.org/abs/2308.13654
Recommendations
- Deep neural networks algorithms for stochastic control problems on finite horizon: convergence analysis
- Approximate policy optimization and adaptive control in regression models
- Model-based reinforcement learning for approximate optimal regulation
- Controlled interacting particle algorithms for simulation-based reinforcement learning
- A projected primal-dual gradient optimal control method for deep reinforcement learning
MSC classification: Artificial neural networks and deep learning (68T07); Environmental economics (natural resource models, harvesting, pollution, etc.) (91B76); Population dynamics (general) (92D25)
Cited In (2)