Tighter convex underestimator for general twice differentiable function for global optimization
Publication: Q6667337
DOI: 10.1051/RO/2024176
MaRDI QID: Q6667337
Authors: Djamel Zerrouki, Mohand Ouanes
Publication date: 20 January 2025
Published in: RAIRO. Operations Research
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Semi-infinite programming (90C34)
Cites Work
- \(\alpha BB\): A global optimization method for general constrained nonconvex problems
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- Global optimization using interval analysis - the multi-dimensional case
- On the efficient Gerschgorin inclusion usage in the global optimization \(\alpha\)BB method
- A new class of improved convex underestimators for twice continuously differentiable constrained NLPs
- Convex underestimation of twice continuously differentiable functions by piecewise quadratic perturbation: spline \(\alpha\)BB underestimators
- An efficient combined DCA and B\&B using DC/SDP relaxation for globally solving binary quadratic programs
- Tight convex underestimators for \({\mathcal{C}^2}\)-continuous problems. II: Multivariate functions
- Rigorous convex underestimators for general twice-differentiable problems
- New quadratic lower bound for multivariate functions in global optimization
- Performance of convex underestimators in a branch-and-bound framework
- Tighter \(\alpha \mathrm{BB}\) relaxations through a refinement scheme for the scaled Gerschgorin theorem