Complexity of gradient descent for multiobjective optimization
Publication: 5198047
DOI: 10.1080/10556788.2018.1510928
zbMath: 1429.90067
OpenAlex: W2889351029
Wikidata: Q129318439 (Scholia: Q129318439)
MaRDI QID: Q5198047
Authors: Jörg Fliege, A. Ismael F. Vaz, Luis Nunes Vicente
Publication date: 2 October 2019
Published in: Optimization Methods and Software
Full work available at URL: https://eprints.soton.ac.uk/423833/1/wcc_moo.pdf
MSC classification:
- Abstract computational complexity for mathematical programming problems (90C60)
- Multi-objective and goal programming (90C29)
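As context for this record: the paper analyzes the worst-case iteration complexity of gradient (steepest) descent for multiobjective optimization. A minimal LaTeX sketch of the per-iteration subproblem, assuming the standard Fliege-Svaiter steepest-descent direction for smooth objectives; the precise assumptions and constants are stated in the paper itself.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Assumption: the standard Fliege--Svaiter steepest-descent subproblem,
% which is the gradient step analyzed in this line of work.
For smooth objectives $f_1,\dots,f_m$ on $\mathbb{R}^n$, the multiobjective
steepest-descent direction at $x$ solves
\[
  d(x) = \operatorname*{arg\,min}_{d \in \mathbb{R}^n}
  \max_{1 \le i \le m} \nabla f_i(x)^{\mathsf{T}} d
  + \tfrac{1}{2}\lVert d \rVert^{2},
\]
% d(x) = 0 holds exactly at Pareto critical points, so the norm of d(x_k)
% serves as the criticality measure that the complexity analysis drives
% below a tolerance epsilon.
with $d(x) = 0$ exactly at the Pareto critical points.
\end{document}

Under the smoothness and convexity assumptions spelled out in the paper, the reported bounds match the single-objective rates: on the order of \(\varepsilon^{-2}\) iterations in the nonconvex case, \(\varepsilon^{-1}\) under convexity, and \(\log(1/\varepsilon)\) under strong convexity.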
Related Items (18)
- An incremental descent method for multi-objective optimization
- On high-order model regularization for multiobjective optimization
- On \(q\)-steepest descent method for unconstrained multiobjective optimization problems
- Hypervolume scalarization for shape optimization to improve reliability and cost of ceramic components
- A nonmonotone gradient method for constrained multiobjective optimization problems
- An accelerated proximal gradient method for multiobjective optimization
- Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization
- Adaptive sampling stochastic multigradient algorithm for stochastic multiobjective optimization
- A Barzilai-Borwein descent method for multiobjective optimization problems
- Convergence rates analysis of a multiobjective proximal gradient method
- Inexact gradient projection method with relative error tolerance
- Memory gradient method for multiobjective optimization
- Gradient based biobjective shape optimization to improve reliability and cost of ceramic components
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
- Conditional gradient method for multiobjective optimization
- On efficiency of a single variable bi-objective optimization algorithm
- Iteration-complexity and asymptotic analysis of steepest descent method for multiobjective optimization on Riemannian manifolds
- Accuracy and fairness trade-offs in machine learning: a stochastic multi-objective approach
Cites Work
- Unnamed Item
- Trust region globalization strategy for the nonconvex unconstrained multiobjective optimization problem
- A continuous gradient-like dynamical approach to Pareto-optimization in Hilbert spaces
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Worst case complexity of direct search
- Existence theorems in vector optimization
- Steepest descent methods for multicriteria optimization
- Introductory lectures on convex optimization. A basic course
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Newton's Method for Multiobjective Optimization
- First-Order Methods in Optimization
- Nonlinear Conjugate Gradient Methods for Vector Optimization
- Proximal Methods in Vector Optimization