A framework for controllable Pareto front learning with completed scalarization functions and its applications
Publication: 6148441
DOI: 10.1016/J.NEUNET.2023.10.029
arXiv: 2302.12487
MaRDI QID: Q6148441
FDO: Q6148441
Authors: Tran Anh Tuan, Long P. Hoang, Dung D. Le, Tran Ngoc Thang
Publication date: 11 January 2024
Published in: Neural Networks
Abstract: Pareto Front Learning (PFL) was recently introduced as an efficient method for approximating the entire Pareto front, the set of all optimal solutions to a Multi-Objective Optimization (MOO) problem. In previous work, the mapping between a preference vector and a Pareto-optimal solution remained ambiguous, limiting the reliability of its results. This study establishes the convergence and completeness of solving MOO with pseudoconvex scalarization functions and combines them with a hypernetwork to offer a comprehensive framework for PFL, called Controllable Pareto Front Learning. Extensive experiments demonstrate that our approach is highly accurate and significantly less computationally expensive than traditional methods.
Full work available at URL: https://arxiv.org/abs/2302.12487
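The abstract's core idea is that minimizing a scalarization function for a given preference vector yields a single Pareto-optimal solution, so sweeping the preference vector traces the front. A minimal sketch of this mechanism on a toy bi-objective problem, using the classic weighted-Chebyshev scalarization as an illustrative stand-in (not the paper's exact pseudoconvex scalarization family; all function names here are hypothetical):

```python
import numpy as np

# Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 1)^2 on the reals.
# Its Pareto-optimal set is exactly the interval [0, 1].
def objectives(x):
    return np.array([x ** 2, (x - 1.0) ** 2])

# Weighted-Chebyshev scalarization: for a preference vector lam, minimize
# the largest weighted objective. Its minimizers move along the Pareto
# front as lam varies, which is the "controllable" mapping the paper
# formalizes (with a different, pseudoconvex scalarization family).
def chebyshev(x, lam):
    return np.max(lam * objectives(x))

def solve(lam, grid=np.linspace(-1.0, 2.0, 3001)):
    # Grid search keeps this sketch dependency-free and deterministic.
    values = [chebyshev(x, lam) for x in grid]
    return grid[int(np.argmin(values))]

if __name__ == "__main__":
    for lam in ([1.0, 1.0], [0.8, 0.2], [0.2, 0.8]):
        x_star = solve(np.array(lam))
        print(lam, float(x_star), objectives(x_star))
```

Each preference vector maps to a distinct point of the Pareto set [0, 1]: equal weights land at x = 0.5, and shifting weight onto one objective pulls the solution toward that objective's minimizer. In the paper's framework, a hypernetwork replaces the per-preference solve, learning the map from preference vectors to solutions in one model.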
Recommendations
- Multi-objective reinforcement learning through continuous Pareto manifold approximation
- A flexible objective-constraint approach and a new algorithm for constructing the Pareto front of multiobjective optimization problems
- A numerical method for constructing the Pareto front of multi-objective optimization problems
- Dynamic algorithm selection for Pareto optimal set approximation
- Pareto front approximation through a multi-objective augmented Lagrangian method
Keywords: multi-task learning; multi-objective optimization; hypernetwork; Pareto front learning; scalarization problem
Cites Work
- Convex analysis and monotone operator theory in Hilbert spaces
- Convex Analysis
- Theory of multiobjective optimization
- Voronoi diagrams and arrangements
- Approximation by superpositions of a sigmoidal function
- An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem
- Multiple-gradient descent algorithm (MGDA) for multiobjective optimization
- Generalized Concavity
- Neural network for nonsmooth pseudoconvex optimization with general convex constraints
- A one-layer recurrent neural network for nonsmooth pseudoconvex optimization with quasiconvex inequality and affine equality constraints
- Outcome space algorithm for generalized multiplicative problems and optimization over the efficient set
- A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem
- Optimizing over Pareto set of semistrictly quasiconcave vector maximization and application to stochastic portfolio selection
- Solving generalized convex multiobjective programming problems by a normal direction method
- Efficient retrieval of matrix factorization-based top-k recommendations: a survey of recent approaches