Multi-objective reinforcement learning through continuous Pareto manifold approximation
From MaRDI portal
Publication: 2829188
DOI: 10.1613/JAIR.4961 · zbMATH Open: 1386.68137 · OpenAlex: W2535247013 · MaRDI QID: Q2829188 · FDO: Q2829188
Authors: Simone Parisi, Matteo Pirotta, Marcello Restelli
Publication date: 27 October 2016
Published in: The Journal of Artificial Intelligence Research (JAIR)
Full work available at URL: https://doi.org/10.1613/jair.4961
Recommendations
- Multi-objective reinforcement learning using sets of Pareto dominating policies
- Efficient multi-objective reinforcement learning via multiple-gradient descent with iteratively discovered weight-vector sets
- A survey of multi-objective sequential decision-making
- Multi-Objective Decision Making
- scientific article; zbMATH DE number 2079783
Learning and adaptive systems in artificial intelligence (68T05); Multi-objective and goal programming (90C29)
Cited In (8)
- Efficient multi-objective reinforcement learning via multiple-gradient descent with iteratively discovered weight-vector sets
- Necessary and sufficient Karush-Kuhn-Tucker conditions for multiobjective Markov chains optimality
- Using the Manhattan distance for computing the multiobjective Markov chains problem
- Computing multiobjective Markov chains handled by the extraproximal method
- Multi-condition multi-objective optimization using deep reinforcement learning
- The hard lessons and shifting modeling trends of COVID-19 dynamics: multiresolution modeling approach
- Title not available
- Title not available
This page was built for publication: Multi-objective reinforcement learning through continuous Pareto manifold approximation