SifDec
Software: 16274 · swMATH: 4087 · MaRDI QID: Q16274 · FDO: Q16274
Author name not available
Cited In (showing first 100 items)
- A regularized Newton method without line search for unconstrained optimization
- A matrix-free approach to build band preconditioners for large-scale bound-constrained optimization
- Sobolev seminorm of quadratic functions with applications to derivative-free optimization
- Efficient use of parallelism in algorithmic parameter optimization applications
- Low-rank update of preconditioners for the inexact Newton method with SPD Jacobian
- An Algebraic Analysis of a Block Diagonal Preconditioner for Saddle Point Systems
- Inverse problems and solution methods for a class of nonlinear complementarity problems
- Modified limited memory BFGS method with nonmonotone line search for unconstrained optimization
- Combining and scaling descent and negative curvature directions
- Two extensions of the Dai-Liao method with sufficient descent property based on a penalization scheme
- A simple primal-dual feasible interior-point method for nonlinear programming with monotone descent
- Interior-point methods for nonconvex nonlinear programming: Regularization and warmstarts
- A modified conjugate gradient method based on a modified secant equation
- The global proof of the Polak-Ribière-Polak algorithm under the YWL inexact line search technique
- A penalty-interior-point algorithm for nonlinear constrained optimization
- A curvilinear method based on minimal-memory BFGS updates
- On the iterative solution of KKT systems in potential reduction software for large-scale quadratic problems
- A method combining norm-relaxed QP subproblems with systems of linear equations for constrained optimization
- Trajectory-following methods for large-scale degenerate convex quadratic programming
- A new regularized quasi-Newton algorithm for unconstrained optimization
- On efficiently combining limited-memory and trust-region techniques
- Global convergence of BFGS and PRP methods under a modified weak Wolfe-Powell line search
- A new efficient conjugate gradient method for unconstrained optimization
- Nonlinearly Constrained Optimization Using Heuristic Penalty Methods and Asynchronous Parallel Generating Set Search
- Optimality properties of an augmented Lagrangian method on infeasible problems
- Starting-point strategies for an infeasible potential reduction method
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Adaptive Barrier Update Strategies for Nonlinear Interior Methods
- A nonmonotone hybrid conjugate gradient method for unconstrained optimization
- An Inexact SQP Method for Equality Constrained Optimization
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- A second derivative SQP method: global convergence
- Nonconvex optimization using negative curvature within a modified linesearch
- Line search filter inexact secant methods for nonlinear equality constrained optimization
- An improved nonlinear conjugate gradient method with an optimal property
- Implementing a Smooth Exact Penalty Function for General Constrained Nonlinear Optimization
- Nonmonotone projected gradient methods based on barrier and Euclidean distances
- Optimizing partially separable functions without derivatives
- A subspace implementation of quasi-Newton trust region methods for unconstrained optimization
- Study of a primal-dual algorithm for equality constrained minimization
- Outer trust-region method for constrained optimization
- A nonmonotone filter Barzilai-Borwein method for optimization
- A truncated Newton method in an augmented Lagrangian framework for nonlinear programming
- A superlinearly convergent method of quasi-strongly sub-feasible directions with active set identifying for constrained optimization
- Erratum to: ``Nonlinear programming without a penalty function or a filter''
- Flexible penalty functions for nonlinear constrained optimization
- Dynamic updates of the barrier parameter in primal-dual methods for nonlinear programming
- Convergence analysis of sparse quasi-Newton updates with positive definite matrix completion for two-dimensional functions
- An adaptive augmented Lagrangian method for large-scale constrained optimization
- An efficient Barzilai-Borwein conjugate gradient method for unconstrained optimization
- Active-set strategy in Powell's method for optimization without derivatives
- A filter-trust-region method for simple-bound constrained optimization
- A filter trust-region algorithm for unconstrained optimization with strong global convergence properties
- Validated Solutions of Saddle Point Linear Systems
- Convexity and concavity detection in computational graphs: tree walks for convexity assessment
- A strongly sub-feasible primal-dual quasi interior-point algorithm for nonlinear inequality constrained optimization
- A Preconditioning Framework for Sequences of Diagonally Modified Linear Systems Arising in Optimization
- Variable parameter Uzawa method for solving a class of block three-by-three saddle point problems
- A New Dai-Liao Conjugate Gradient Method with Optimal Parameter Choice
- Numerical research on the sensitivity of nonmonotone trust region algorithms to their parameters
- Preconditioning saddle-point systems with applications in optimization
- Dai-Kou type conjugate gradient methods with a line search only using gradient
- Global optimization test problems based on random field composition
- Structured regularization for barrier NLP solvers
- A modified conjugate gradient method for general convex functions
- A Modified PRP Conjugate Gradient Algorithm with Trust Region for Optimization Problems
- A Structured Quasi-Newton Algorithm for Optimizing with Incomplete Hessian Information
- A nonmonotone weighting self-adaptive trust region algorithm for unconstrained nonconvex optimization
- The flattened aggregate constraint homotopy method for nonlinear programming problems with many nonlinear constraints
- Two classes of spectral conjugate gradient methods for unconstrained optimizations
- An improved Polak-Ribière-Polyak conjugate gradient method with an efficient restart direction
- Computationally Efficient Decompositions of Oblique Projection Matrices
- Mesh-based Nelder-Mead algorithm for inequality constrained optimization
- Efficient solution of quadratically constrained quadratic subproblems within the mesh adaptive direct search algorithm
- A modified secant equation quasi-Newton method for unconstrained optimization
- Asynchronous Parallel Generating Set Search for Linearly Constrained Optimization
- A comparison of reduced and unreduced KKT systems arising from interior point methods
- A scaled conjugate gradient method with moving asymptotes for unconstrained optimization problems
- Best practices for comparing optimization algorithms
- A frame-based conjugate gradients direct search method with radial basis function interpolation model
- A new superlinearly convergent algorithm of combining QP subproblem with system of linear equations for nonlinear optimization
- An improved Perry conjugate gradient method with adaptive parameter choice
- A subspace version of the Powell-Yuan trust-region algorithm for equality constrained optimization
- Limited-memory LDL\(^{\top}\) factorization of symmetric quasi-definite matrices with application to constrained optimization
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
- Efficient solution of many instances of a simulation-based optimization problem utilizing a partition of the decision space
- Compact representations of structured BFGS matrices
- Minimizing the Condition Number for Small Rank Modifications
- Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems
- A new subspace minimization conjugate gradient method based on modified secant equation for unconstrained optimization
- A feasible filter SQP algorithm with global and local convergence
- A retrospective trust-region method for unconstrained optimization
- An improved Dai-Kou conjugate gradient algorithm for unconstrained optimization
- A subspace minimization conjugate gradient method based on conic model for unconstrained optimization
- A class of derivative-free nonmonotone optimization algorithms employing coordinate rotations and gradient approximations
- Corrigendum to: ``Krasnosel'skii type hybrid fixed point theorems and their applications to fractional integral equations''
- Using improved directions of negative curvature for the solution of bound-constrained nonconvex problems
- A nonmonotone inexact Newton method for unconstrained optimization
- A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations
- A \(q\)-Polak-Ribière-Polyak conjugate gradient algorithm for unconstrained optimization problems