Multi-objective reinforcement learning through continuous Pareto manifold approximation
Publication: 2829188
DOI: 10.1613/JAIR.4961
zbMATH Open: 1386.68137
OpenAlex: W2535247013
MaRDI QID: Q2829188
FDO: Q2829188
Matteo Pirotta, Marcello Restelli, Simone Parisi
Publication date: 27 October 2016
Published in: Journal of Artificial Intelligence Research (JAIR)
Full work available at URL: https://doi.org/10.1613/jair.4961
Learning and adaptive systems in artificial intelligence (68T05); Multi-objective and goal programming (90C29)
Cited in (8):
- Efficient Multi-objective Reinforcement Learning via Multiple-gradient Descent with Iteratively Discovered Weight-Vector Sets
- Necessary and sufficient Karush-Kuhn-Tucker conditions for multiobjective Markov chains optimality
- Using the Manhattan distance for computing the multiobjective Markov chains problem
- Computing multiobjective Markov chains handled by the extraproximal method
- Multi-condition multi-objective optimization using deep reinforcement learning
- The hard lessons and shifting modeling trends of COVID-19 dynamics: multiresolution modeling approach
- Title not available
- Title not available