A framework for controllable Pareto front learning with completed scalarization functions and its applications

From MaRDI portal
Publication:6148441

DOI: 10.1016/J.NEUNET.2023.10.029
arXiv: 2302.12487
MaRDI QID: Q6148441
FDO: Q6148441


Authors: Tran Anh Tuan, Long P. Hoang, Dung D. Le, Tran Ngoc Thang


Publication date: 11 January 2024

Published in: Neural Networks

Abstract: Pareto Front Learning (PFL) was recently introduced as an efficient method for approximating the entire Pareto front, the set of all optimal solutions to a Multi-Objective Optimization (MOO) problem. In previous work, the mapping between a preference vector and a Pareto optimal solution remains ambiguous, making the results difficult to control. This study demonstrates the convergence and completion aspects of solving MOO with pseudoconvex scalarization functions and combines them with a hypernetwork to offer a comprehensive framework for PFL, called Controllable Pareto Front Learning. Extensive experiments demonstrate that our approach is highly accurate and significantly less computationally expensive than traditional methods.
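The core idea in the abstract — learning a single map from a preference vector to a Pareto optimal solution by minimizing a scalarization function — can be illustrated with a minimal sketch. The toy bi-objective problem, the affine "hypernetwork," and all parameter names below are illustrative assumptions, not the paper's actual architecture (the paper trains a neural hypernetwork with pseudoconvex scalarizations); here a weighted Chebyshev scalarization stands in as one classical scalarization whose minimizers are weakly Pareto optimal.

```python
import numpy as np

# Toy bi-objective problem: f1(x) = x^2, f2(x) = (1 - x)^2.
# Its Pareto front corresponds to x in [0, 1].
def f1(x): return x ** 2
def f2(x): return (1.0 - x) ** 2

# Stand-in "hypernetwork": an affine map from a preference vector
# r = (r1, r2) on the simplex to a decision variable x(r).
# (The paper uses a neural hypernetwork; this is only a sketch.)
theta = np.zeros(3)  # [w1, w2, b] -- hypothetical parameters

def hyper(r, th):
    return th[0] * r[0] + th[1] * r[1] + th[2]

rng = np.random.default_rng(0)
for t in range(20000):
    r1 = rng.uniform(0.05, 0.95)
    r = np.array([r1, 1.0 - r1])
    x = hyper(r, theta)
    # Weighted Chebyshev scalarization: g(x, r) = max_i r_i * f_i(x).
    # Minimizing it for each preference r yields a (weakly) Pareto
    # optimal point, which is what makes the learned map controllable.
    if r[0] * f1(x) >= r[1] * f2(x):
        dg_dx = 2.0 * r[0] * x              # subgradient of active term
    else:
        dg_dx = -2.0 * r[1] * (1.0 - x)
    lr = 0.05 / (1.0 + 1e-3 * t)            # decaying step size
    theta -= lr * dg_dx * np.array([r[0], r[1], 1.0])

# For the symmetric preference (0.5, 0.5) the analytic optimum is
# x* = sqrt(r2) / (sqrt(r1) + sqrt(r2)) = 0.5.
print(hyper(np.array([0.5, 0.5]), theta))
```

After training, the single learned map returns a different Pareto point for each preference vector (larger weight on f1 drives x toward 0), so the whole front is recovered from one model — the key computational advantage the abstract claims over solving one scalarized problem per preference.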


Full work available at URL: https://arxiv.org/abs/2302.12487










