CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
From MaRDI portal
Publication: 2352415
DOI: 10.1007/s10589-014-9687-3
zbMath: 1325.90004
OpenAlex: W2057565823
Wikidata: Q58185665 (Scholia: Q58185665)
MaRDI QID: Q2352415
Dominique Orban, Nicholas I. M. Gould, Philippe L. Toint
Publication date: 1 July 2015
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-014-9687-3
Related Items
- A new subspace minimization conjugate gradient method based on tensor model for unconstrained optimization
- On the numerical performance of finite-difference-based methods for derivative-free optimization
- Full-low evaluation methods for derivative-free optimization
- An inexact first-order method for constrained nonlinear optimization
- On the complexity of solving feasibility problems with regularized models
- Efficient Preconditioners for Interior Point Methods via a New Schur Complement-Based Strategy
- A competitive inexact nonmonotone filter SQP method: convergence analysis and numerical results
- An efficient hybrid conjugate gradient method with sufficient descent property for unconstrained optimization
- A symmetric grouped and ordered multi-secant Quasi-Newton update formula
- A Computational Study of Using Black-box QR Solvers for Large-scale Sparse-dense Linear Least Squares Problems
- Exploiting Problem Structure in Derivative Free Optimization
- Adaptive Finite-Difference Interval Estimation for Noisy Derivative-Free Optimization
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- Detecting negative eigenvalues of exact and approximate Hessian matrices in optimization
- On Using Cholesky-Based Factorizations and Regularization for Solving Rank-Deficient Sparse Linear Least-Squares Problems
- Direct Search Based on Probabilistic Descent in Reduced Spaces
- Optimization by moving ridge functions: derivative-free optimization for computationally intensive functions
- Regularization of limited memory quasi-Newton methods for large-scale nonconvex minimization
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- A new multipoint symmetric secant method with a dense initial matrix
- A merit function approach for evolution strategies
- A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
- A new subspace minimization conjugate gradient method for unconstrained minimization
- Competitive secant (BFGS) methods based on modified secant relations for unconstrained optimization
- Constrained Optimization in the Presence of Noise
- The regularization continuation method for optimization problems with nonlinear equality constraints
- A Class of Approximate Inverse Preconditioners Based on Krylov-Subspace Methods for Large-Scale Nonconvex Optimization
- Gradient methods exploiting spectral properties
- Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
- A Regularized Factorization-Free Method for Equality-Constrained Optimization
- A globally convergent gradient-like method based on the Armijo line search
- An efficient hybrid conjugate gradient method for unconstrained optimization
- A sequential quadratic programming algorithm for equality-constrained optimization without derivatives
- Improving the Flexibility and Robustness of Model-based Derivative-free Optimization Solvers
- Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
- The Conjugate Residual Method in Linesearch and Trust-Region Methods
- A Derivative-Free Method for Structured Optimization Problems
- A Tridiagonalization Method for Symmetric Saddle-Point Systems
- Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
- Complexity and performance of an Augmented Lagrangian algorithm
- A one-parameter class of three-term conjugate gradient methods with an adaptive parameter choice
- Equipping the Barzilai--Borwein Method with the Two Dimensional Quadratic Termination Property
- Solving the Cubic Regularization Model by a Nested Restarting Lanczos Method
- Adaptive trust-region algorithms for unconstrained optimization
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- A NEW DERIVATIVE-FREE CONJUGATE GRADIENT METHOD FOR LARGE-SCALE NONLINEAR SYSTEMS OF EQUATIONS
- A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization
- Dynamic scaling in the mesh adaptive direct search algorithm for blackbox optimization
- A two-stage active-set algorithm for bound-constrained optimization
- A limited memory quasi-Newton trust-region method for box constrained optimization
- A progressive barrier derivative-free trust-region algorithm for constrained optimization
- A new nonmonotone adaptive trust region algorithm
- A Schur complement approach to preconditioning sparse linear least-squares problems with some dense rows
- An active-set algorithm for norm constrained quadratic problems
- Sequential equality-constrained optimization for nonlinear programming
- Algebraic rules for computing the regularization parameter of the Levenberg-Marquardt method
- An augmented Lagrangian method exploiting an active-set strategy and second-order information
- A Solver for Nonconvex Bound-Constrained Quadratic Optimization
- Primal and dual active-set methods for convex quadratic programming
- Two globally convergent nonmonotone trust-region methods for unconstrained optimization
- LMBOPT: a limited memory method for bound-constrained optimization
- Efficient unconstrained black box optimization
- Best practices for comparing optimization algorithms
- Limited-memory BFGS with displacement aggregation
- A primal-dual augmented Lagrangian penalty-interior-point filter line search algorithm
- Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm
- Iteratively sampling scheme for stochastic optimization with variable number sample path
- Non-monotone algorithm for minimization on arbitrary domains with applications to large-scale orthogonal Procrustes problem
- A null-space approach for large-scale symmetric saddle point systems with a small and non zero \((2, 2)\) block
- On the update of constraint preconditioners for regularized KKT systems
- A novel class of approximate inverse preconditioners for large positive definite linear systems in optimization
- An active set trust-region method for bound-constrained optimization
- Learning to steer nonlinear interior-point methods
- An extended nonmonotone line search technique for large-scale unconstrained optimization
- A Feasible Active Set Method for Strictly Convex Quadratic Problems with Simple Bounds
- A new subspace minimization conjugate gradient method with nonmonotone line search for unconstrained optimization
- Updating Constraint Preconditioners for KKT Systems in Quadratic Programming Via Low-Rank Corrections
- A Nonmonotone Filter SQP Method: Local Convergence and Numerical Results
- An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems
- A novel hybrid trust region algorithm based on nonmonotone and LOOCV techniques
- Direct search based on probabilistic feasible descent for bound and linearly constrained problems
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- A new restarting adaptive trust-region method for unconstrained optimization
- Solving Mixed Sparse-Dense Linear Least-Squares Problems by Preconditioned Iterative Methods
- A note on solving nonlinear optimization problems in variable precision
- A regularization method for constrained nonlinear least squares
- Preconditioned nonlinear conjugate gradient methods based on a modified secant equation
- Iterative grossone-based computation of negative curvature directions in large-scale optimization
- Exploiting negative curvature in deterministic and stochastic optimization
- Modeling approaches for addressing unrelaxable bound constraints with unconstrained optimization methods
- Two-step conjugate gradient method for unconstrained optimization
- A limited-memory trust-region method for nonlinear optimization with many equality constraints
- Secant penalized BFGS: a noise robust quasi-Newton method via penalizing the secant condition
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- An augmented Lagrangian filter method
- Scaled projected-directions methods with application to transmission tomography
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Sequential quadratic programming methods for parametric nonlinear optimization
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- Implementing a Smooth Exact Penalty Function for Equality-Constrained Nonlinear Optimization
- A new class of conjugate gradient methods for unconstrained smooth optimization and absolute value equations
- Primal-dual active-set methods for large-scale optimization
- trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
- On the solution of linearly constrained optimization problems by means of barrier algorithms
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- An accelerated nonmonotone trust region method with adaptive trust region for unconstrained optimization
- Exploiting damped techniques for nonlinear conjugate gradient methods
- An improved adaptive trust-region algorithm
- An inexact proximal regularization method for unconstrained optimization
- Global optimization test problems based on random field composition
- Novel preconditioners based on quasi-Newton updates for nonlinear conjugate gradient methods
- A stabilized SQP method: superlinear convergence
- High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Issues on the use of a modified Bunch and Kaufman decomposition for large scale Newton's equation
- Backward Step Control for Global Newton-Type Methods
- A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Approximate solution of system of equations arising in interior-point methods for bound-constrained optimization
- Secant update generalized version of PSB: a new approach
- On efficiency of nonmonotone Armijo-type line searches
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- A new regularized quasi-Newton method for unconstrained optimization
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
- Exact linesearch limited-memory quasi-Newton methods for minimizing a quadratic function
- A derivative-free Gauss-Newton method
- A second-order globally convergent direct-search method and its worst-case complexity
- An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation
- Adaptive augmented Lagrangian methods: algorithms and practical numerical experience
- Numerical experience with a derivative-free trust-funnel method for nonlinear optimization problems with general nonlinear constraints
- Null-Space Preconditioners for Saddle Point Systems
- CUTEst
- An infeasible interior-point arc-search algorithm for nonlinear constrained optimization
- Preconditioning of Linear Least Squares by Robust Incomplete Factorization for Implicitly Held Normal Equations
- A new conjugate gradient method with an efficient memory structure
- A subspace SQP method for equality constrained optimization
- An improved hybrid-ORBIT algorithm based on point sorting and MLE technique
- On global minimizers of quadratic functions with cubic regularization
- Block preconditioners for linear systems in interior point methods for convex constrained optimization
- QPALM: a proximal augmented Lagrangian method for nonconvex quadratic programs
- Linear equalities in blackbox optimization
- Diagonal BFGS updates and applications to the limited memory BFGS method
- A derivative-free exact penalty algorithm: basic ideas, convergence theory and computational studies
- Methods for convex and general quadratic programming
Uses Software
Cites Work
- Methods for convex and general quadratic programming
- Numerical comparison of augmented Lagrangian algorithms for nonconvex problems
- Algorithm 909
- CUTE
- PENNON: A code for convex nonlinear and semidefinite programming
- A repository of convex quadratic programming problems
- A Sequential Linear Constraint Programming Algorithm for NLP
- Algorithm 813
- GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization
- CUTEr and SifDec