On high-order model regularization for multiobjective optimization
Publication:5038176
DOI: 10.1080/10556788.2020.1719408
zbMATH Open: 1501.90088
OpenAlex: W3005116386
MaRDI QID: Q5038176
FDO: Q5038176
Author name not available
Publication date: 29 September 2022
Published in: Optimization Methods & Software
Full work available at URL: https://doi.org/10.1080/10556788.2020.1719408
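For orientation, the "high-order model regularization" named in the title generally refers to replacing each objective \(F_j\) by its \(p\)-th order Taylor model around the current iterate and adding a power-\((p+1)\) penalty on the step. A plausible generic form of the subproblem, patterned on the unconstrained high-order regularization papers and the max-type scalarizations common in the multiobjective descent methods listed under "Cites Work" (and not necessarily the exact model used in this paper), is
\[
s_k \in \operatorname*{arg\,min}_{s \in \mathbb{R}^n} \; \max_{1 \le j \le q} \big( T_{j,p}(x_k, s) - F_j(x_k) \big) + \frac{\sigma_k}{p+1}\, \|s\|^{p+1},
\]
where \(T_{j,p}(x_k, \cdot)\) denotes the \(p\)-th order Taylor polynomial of \(F_j\) at the iterate \(x_k\) and \(\sigma_k > 0\) is a regularization parameter adjusted adaptively to enforce sufficient decrease.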
Recommendations
- On High-order Model Regularization for Constrained Optimization
- Conical regularization for multi-objective optimization problems
- Convergence analysis of Tikhonov-type regularization algorithms for multiobjective optimization problems
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
MSC classification
- Multi-objective and goal programming (90C29)
- Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- Testing Unconstrained Optimization Software
- Nonlinear multiobjective optimization
- Title not available
- Scalarizing vector optimization problems
- Proper efficiency and the theory of vector maximization
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Adaptive Scalarization Methods in Multiobjective Optimization
- Introduction to Shape Optimization
- Multiobjective optimization. Interactive and evolutionary approaches
- Optimization over the efficient set of multi-objective convex optimal control problems
- Steepest descent methods for multicriteria optimization.
- A projected gradient method for vector optimization problems
- On the convergence of the projected gradient method for vector optimization
- Newton's Method for Multiobjective Optimization
- A quadratically convergent Newton method for vector optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- A new scalarization and numerical method for constructing the weak Pareto front of multi-objective optimization problems
- An inexact restoration approach to optimization problems with multiobjective constraints under weighted-sum scalarization
- A new scalarization technique to approximate Pareto fronts of problems with disconnected feasible sets
- Optimization over the efficient set
- Existence theorems in vector optimization
- Cubic regularization of Newton method and its global performance
- Nonmonotone algorithm for minimization on closed sets with applications to minimization on Stiefel manifolds
- Density-based globally convergent trust-region methods for self-consistent field electronic structure calculations
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- On High-order Model Regularization for Constrained Optimization
- Nonlinear Conjugate Gradient Methods for Vector Optimization
- Complexity of gradient descent for multiobjective optimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
Cited In (5)
- On High-order Model Regularization for Constrained Optimization
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
- Convergence rates analysis of a multiobjective proximal gradient method
- Universal nonmonotone line search method for nonconvex multiobjective optimization problems with convex constraints
- Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization