Testing Unconstrained Optimization Software
Publication: 3902415
DOI: 10.1145/355934.355936
zbMath: 0454.65049
OpenAlex: W2056603712
MaRDI QID: Q3902415
Jorge J. Moré, Burton S. Garbow, Kenneth E. Hillstrom
Publication date: 1981
Published in: ACM Transactions on Mathematical Software
Full work available at URL: https://doi.org/10.1145/355934.355936
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Numerical computation of solutions to systems of equations (65H10)
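The paper defines a standard collection of test functions, each given as a sum of squared residuals with a prescribed starting point, for exercising unconstrained optimization, nonlinear least-squares, and nonlinear-equation software. As a minimal sketch, the first problem in the collection, the Rosenbrock function, can be evaluated as follows (the function names here are illustrative, not from the paper):

```python
def rosenbrock_residuals(x):
    """Residuals f_1, f_2 of the Rosenbrock test problem
    (problem 1 in the More-Garbow-Hillstrom collection)."""
    x1, x2 = x
    return [10.0 * (x2 - x1 * x1), 1.0 - x1]

def sum_of_squares(residuals):
    """Objective value F(x) = sum_i f_i(x)^2."""
    return sum(r * r for r in residuals)

# Standard starting point for the Rosenbrock problem.
x0 = (-1.2, 1.0)
f0 = sum_of_squares(rosenbrock_residuals(x0))
print(f0)  # approximately 24.2; the minimum F = 0 is at (1, 1)
```

Each problem in the collection follows this same pattern: a residual vector, a standard starting point, and a known minimizer, which is what makes the set convenient for benchmarking solvers.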
Related Items
Quasi-Newton methods based on ordinary differential equation approach for unconstrained nonlinear optimization, A nonmonotone trust region method based on simple conic models for unconstrained optimization, A new direct search method based on separable fractional interpolation model, A class of parameter-free filled functions for box-constrained system of nonlinear equations, Douglas-Rachford splitting method for semidefinite programming, An inexact restoration approach to optimization problems with multiobjective constraints under weighted-sum scalarization, A direct search method for unconstrained quantile-based simulation optimization, Global optimization using \(q\)-gradients, A new family of globally convergent conjugate gradient methods, Test problem generator for unconstrained global optimization, Approximate Gauss-Newton methods for solving underdetermined nonlinear least squares problems, Approximating the objective function's gradient using perceptrons for constrained minimization with application in drag reduction, A sufficient descent Dai-Yuan type nonlinear conjugate gradient method for unconstrained optimization problems, On the performance of a new symmetric rank-one method with restart for solving unconstrained optimization problems, Modified nonlinear conjugate gradient method with sufficient descent condition for unconstrained optimization, A Shamanskii-like Levenberg-Marquardt method for nonlinear equations, Use of the minimum norm search direction in a nonmonotone version of the Gauss-Newton method, Parallel Uzawa method for large-scale minimization of partially separable functions, A nonmonotone trust region method with new inexact line search for unconstrained optimization, Scaled diagonal gradient-type method with extra update for large-scale unconstrained optimization, Nonlinear conjugate gradient methods with Wolfe type line search, A new two-step gradient-type method for large-scale unconstrained optimization, CARTopt: a random search 
method for nonsmooth unconstrained optimization, Multiagent cooperation for solving global optimization problems: an extendible framework with example cooperation strategies, A trust-region-based BFGS method with line search technique for symmetric nonlinear equations, The convergence of conjugate gradient method with nonmonotone line search, New cautious BFGS algorithm based on modified Armijo-type line search, A nonmonotone trust region method with adaptive radius for unconstrained optimization problems, Nonsmooth exclusion test for finding all solutions of nonlinear equations, Local convergence of a secant type method for solving least squares problems, A nonmonotone globalization algorithm with preconditioned gradient path for unconstrained optimization, A nonmonotone adaptive trust region method for unconstrained optimization based on conic model, Nonmonotone trust region algorithm for unconstrained optimization problems, A model-hybrid approach for unconstrained optimization problems, On per-iteration complexity of high order Chebyshev methods for sparse functions with banded Hessians, Nonmonotone adaptive trust region method, Nonmonotone second-order Wolfe's line search method for unconstrained optimization problems, Global convergence of a nonlinear conjugate gradient method, Global convergence of a modified spectral conjugate gradient method, An efficient descent direction method with cutting planes, An efficient nonmonotone trust-region method for unconstrained optimization, An improved trust region algorithm for nonlinear equations, A variant spectral-type FR conjugate gradient method and its global convergence, A hybrid shuffled complex evolution approach based on differential evolution for unconstrained optimization, Combining nonmonotone conic trust region and line search techniques for unconstrained optimization, A restarting approach for the symmetric rank one update for unconstrained optimization, Rosenbrock artificial bee colony algorithm for 
accurate global optimization of numerical functions, A secant method for nonlinear least-squares minimization, Implementing the Nelder-Mead simplex algorithm with adaptive parameters, Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization, A smoothing Newton method with Fischer-Burmeister function for second-order cone complementarity problems, An improved multi-step gradient-type method for large scale optimization, A mixed spectral CD-DY conjugate gradient method, A BFGS trust-region method for nonlinear equations, Limited memory BFGS method with backtracking for symmetric nonlinear equations, A combined class of self-scaling and modified quasi-Newton methods, A symmetric rank-one method based on extra updating techniques for unconstrained optimization, A derivative free iterative method for solving least squares problems, Two minimal positive bases based direct search conjugate gradient methods for computationally expensive functions, Constructing composite search directions with parameters in quadratic interpolation models, A higher-order Levenberg-Marquardt method for nonlinear equations, Self-adaptive randomized and rank-based differential evolution for multimodal problems, Analysis of sparse quasi-Newton updates with positive definite matrix completion, Modifications of Newton's method to extend the convergence domain, Cross-Hill: a heuristic method for global optimization, A quasi-Newton trust region method based on a new fractional model, Continuous global optimization through the generation of parametric curves, A nonmonotone line search method for noisy minimization, A trust-region approach with novel filter adaptive radius for system of nonlinear equations, A class of diagonal quasi-Newton methods for large-scale convex minimization, Bounds tightening based on optimality conditions for nonconvex box-constrained optimization, A quasi-Newton algorithm for 
large-scale nonlinear equations, A nonmonotone Levenberg-Marquardt method for nonlinear complementarity problems under local error bound, On the multi-point Levenberg-Marquardt method for singular nonlinear equations, A new trust-region method for solving systems of equalities and inequalities, Analysis of a self-scaling quasi-Newton method, A hybrid trust region algorithm for unconstrained optimization, A hybrid ODE-based method for unconstrained optimization problems, A first-order interior-point method for linearly constrained smooth optimization, Modified nonmonotone Armijo line search for descent method, Global convergence of a two-parameter family of conjugate gradient methods without line search, The extrapolated interval global optimization algorithm, A convergent variant of the Nelder--Mead algorithm, An accurate active set Newton algorithm for large scale bound constrained optimization., A critical review of discrete filled function methods in solving nonlinear discrete optimization problems, A modified Brown algorithm for solving singular nonlinear systems with rank defects, CANM, a program for numerical solution of a system of nonlinear equations using the continuous analog of Newton's method, A Metropolis algorithm combined with Hooke-Jeeves local search method applied to global optimization, Convergence analysis of a modified BFGS method on convex minimizations, The method of successive orthogonal projections for solving nonlinear simultaneous equations, Scaled memoryless symmetric rank one method for large-scale optimization, A conjugate gradient method with descent direction for unconstrained optimization, minpack, A restricted trust region algorithm for unconstrained optimization, A new type of quasi-Newton updating formulas based on the new quasi-Newton equation, Benchmarking results for the Newton-Anderson method, A quasi-Newton method with modification of one column per iteration, A parallel unconstrained quasi-Newton algorithm and its 
performance on a local memory parallel computer, A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations, A trust-region method with improved adaptive radius for systems of nonlinear equations, An adaptive trust-region method without function evaluations, A method of trust region type for minimizing noisy functions, Superlinearly convergent trust-region method without the assumption of positive-definite Hessian, Novel algorithms for noisy minimization problems with applications to neural networks training, A multi-iterate method to solve systems of nonlinear equations, A cubic regularization of Newton's method with finite difference Hessian approximations, A modified Levenberg-Marquardt method with line search for nonlinear equations, A generalized multivariable Newton method, A study of Liu-Storey conjugate gradient methods for vector optimization, A conjugate gradient type method for the nonnegative constraints optimization problems, On \(q\)-BFGS algorithm for unconstrained optimization problems, A quasi-Newton method with Wolfe line searches for multiobjective optimization, A new family of hybrid three-term conjugate gradient methods with applications in image restoration, The global convergence of the BFGS method with a modified WWP line search for nonconvex functions, Two new conjugate gradient methods for unconstrained optimization, On the global convergence of an inexact quasi-Newton conditional gradient method for constrained nonlinear systems, On the use of third-order models with fourth-order regularization for unconstrained optimization, Convergence and complexity analysis of a Levenberg-Marquardt algorithm for inverse problems, On the extension of the Hager-Zhang conjugate gradient method for vector optimization, A note on solving nonlinear optimization problems in variable precision, A regularization method for constrained nonlinear least squares, A Shamanskii-like self-adaptive 
Levenberg-Marquardt method for nonlinear equations, Shamanskii-like Levenberg-Marquardt method with a new line search for systems of nonlinear equations, A class of gradient unconstrained minimization algorithms with adaptive stepsize, On constrained optimization with nonconvex regularization, Accelerated diagonal gradient-type method for large-scale unconstrained optimization, PRP-like algorithm for monotone operator equations, Combining cross-entropy and MADS methods for inequality constrained global optimization, Using gradient directions to get global convergence of Newton-type methods, An improved Polak-Ribière-Polyak conjugate gradient method with an efficient restart direction, Numerical study of a smoothing algorithm for the complementarity system over the second-order cone, Damped techniques for the limited memory BFGS method for large-scale optimization, A global convergence of LS-CD hybrid conjugate gradient method, A modified ODE-based algorithm for unconstrained optimization problems, A regularized limited memory BFGS method for nonconvex unconstrained minimization, A practical PR+ conjugate gradient method only using gradient, A new nonmonotone line-search trust-region approach for nonlinear systems, A modified nonlinear conjugate gradient method with the Armijo line search and its application, Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization, Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems, A new nonlinear conjugate gradient method with guaranteed global convergence, A Newton-like trust region method for large-scale unconstrained nonconvex minimization, A new trust region method for solving least-square transformation of system of equalities and inequalities, \textsc{Oscars}-II: an algorithm for bound constrained global optimization, Nonmonotone line searches for unconstrained multiobjective optimization problems, A class of
nonmonotone stabilization methods in unconstrained optimization, Accelerating convergence of the globalized Newton method to critical solutions of nonlinear equations, A generalized worst-case complexity analysis for non-monotone line searches, Conditional gradient method for multiobjective optimization, A structured quasi-Newton algorithm with nonmonotone search strategy for structured NLS problems and its application in robotic motion control, On efficiency of nonmonotone Armijo-type line searches, Limited memory technique using trust regions for nonlinear equations, A modified hybrid conjugate gradient method for unconstrained optimization, Global convergence of BFGS and PRP methods under a modified weak Wolfe-Powell line search, An efficient line search trust-region for systems of nonlinear equations, A new CG algorithm based on a scaled memoryless BFGS update with adaptive search strategy, and its application to large-scale unconstrained optimization problems, A derivative-free Gauss-Newton method, A modified Broyden-like quasi-Newton method for nonlinear equations, FR-type algorithm for finding approximate solutions to nonlinear monotone operator equations, Nonlinear Kaczmarz algorithms and their convergence, Secant update version of quasi-Newton PSB with weighted multisecant equations, Generalized continuation Newton methods and the trust-region updating strategy for the underdetermined system, Worst-case evaluation complexity of derivative-free nonmonotone line search methods for solving nonlinear systems of equations, Two nonmonotone trust region algorithms based on an improved Newton method, A two-step improved Newton method to solve convex unconstrained optimization problems, Local convergence of the Levenberg-Marquardt method under Hölder metric subregularity, Gauss-Newton methods with approximate projections for solving constrained nonlinear least squares problems, Adaptive scaling damped BFGS method without gradient Lipschitz continuity, On the 
bang-bang control approach via a component-wise line search strategy for unconstrained optimization, A tensor trust-region model for nonlinear system, A modified Nelder-Mead barrier method for constrained optimization, The Nelder-Mead simplex algorithm with perturbed centroid for high-dimensional function optimization, An efficient conjugate gradient trust-region approach for systems of nonlinear equation, Continuation Newton methods with the residual trust-region time-stepping scheme for nonlinear equations, Triangular decomposition of CP factors of a third-order tensor with application to solving nonlinear systems of equations, A simple approximated solution method for solving fractional trust region subproblems of nonlinearly equality constrained optimization, Least-squares-based three-term conjugate gradient methods, On large-scale unconstrained optimization and arbitrary regularization, Two improved nonlinear conjugate gradient methods with the strong Wolfe line search, Levenberg-Marquardt method based on probabilistic Jacobian models for nonlinear equations, Globally convergent Newton-type methods for multiobjective optimization, Convergence rate of the modified Levenberg-Marquardt method under Hölderian local error bound, An efficient two-step trust-region algorithm for exactly determined consistent systems of nonlinear equations, A genetic algorithm with a self-reproduction operator to solve systems of nonlinear equations, A novel self-adaptive trust region algorithm for unconstrained optimization, A hybrid conjugate gradient method with descent property for unconstrained optimization, On the global convergence of a parameter-adjusting Levenberg-Marquardt method, A nonmonotone hybrid conjugate gradient method for unconstrained optimization, Derivative-free robust optimization for circuit design, Improved convergence results of an efficient Levenberg-Marquardt method for nonlinear equations, Two classes of spectral conjugate gradient methods for 
unconstrained optimizations, Two modified conjugate gradient methods for unconstrained optimization with applications in image restoration problems, An efficient conjugate gradient-based algorithm for unconstrained optimization and its projection extension to large-scale constrained nonlinear equations with applications in signal recovery and image denoising problems, A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations, Two modified nonlinear conjugate gradient methods with disturbance factors for unconstrained optimization, Structured spectral algorithm with a nonmonotone line search for nonlinear least squares, A relaxed nonmonotone adaptive trust region method for solving unconstrained optimization problems, A modified two steps Levenberg-Marquardt method for nonlinear equations, A wedge trust region method with self-correcting geometry for derivative-free optimization, Structured diagonal Gauss-Newton method for nonlinear least squares, New implicit updates in multi-step quasi-Newton methods for unconstrained optimisation, From linear to nonlinear iterative methods, Mesh-based Nelder-Mead algorithm for inequality constrained optimization, Inexact trust region method for large sparse systems of nonlinear equations, Multi-step quasi-Newton methods for optimization, Computational experience with known variable metric updates, A class of nonmonotone stabilization trust region methods, The least prior deviation quasi-Newton update, Family of optimally conditioned quasi-Newton updates for unconstrained optimization, A quasi-Newton method using a nonquadratic model, A hybrid of adjustable trust-region and nonmonotone algorithms for unconstrained optimization, A new adaptive trust-region method for system of nonlinear equations, Performance comparison of memetic algorithms, Quasi-Newton method by Hermite interpolation, Interpolation by conic model for unconstrained optimization, A trust-region strategy for minimization on 
arbitrary domains, A new adaptive trust region algorithm for optimization problems, Convergence and numerical results for a parallel asynchronous quasi-Newton method, The convergence of quasi-Gauss-Newton methods for nonlinear problems, Efficient line search algorithm for unconstrained optimization, Best practices for comparing optimization algorithms, A corrected Levenberg-Marquardt algorithm with a nonmonotone line search for the system of nonlinear equations, A modified Wei-Yao-Liu conjugate gradient method for unconstrained optimization, On the efficiency of gradient based optimization algorithms for DNS-based optimal control in a turbulent channel flow, Inexact Newton methods for solving nonsmooth equations, A new descent algorithm using the three-step discretization method for solving unconstrained optimization problems, The convergence speed of interval methods for global optimization, An inexact Newton-like conditional gradient method for constrained nonlinear systems, Modifying the BFGS method, On the convergence of \(s\)-dependent GFR conjugate gradient method for unconstrained optimization, A globally convergent method for nonlinear least-squares problems based on the Gauss-Newton model with spectral correction, Efficient solution of many instances of a simulation-based optimization problem utilizing a partition of the decision space, How difficult is nonlinear optimization? 
A practical solver tuning approach, with illustrative results, Nonmonotone Levenberg-Marquardt algorithms and their convergence analysis, Alternating multi-step quasi-Newton methods for unconstrained optimization, A Barzilai and Borwein scaling conjugate gradient method for unconstrained optimization problems, A class of one parameter conjugate gradient methods, Quasi-Newton methods with derivatives, A new conjugate gradient algorithm with sufficient descent property for unconstrained optimization, Numerical experiments with the Lancelot package (Release \(A\)) for large-scale nonlinear optimization, Quasi-Newton ABS methods for solving nonlinear algebraic systems of equations, Using function-values in multi-step quasi-Newton methods, A dimension-reducing method for unconstrained optimization, A switching-method for nonlinear systems, Parallel algorithm for unconstrained optimization based on decomposition techniques, A new class of nonmonotone adaptive trust-region methods for nonlinear equations with box constraints, OPTAC: A portable software package for analyzing and comparing optimization methods by visualization, An extended nonmonotone line search technique for large-scale unconstrained optimization, A new variant of the memory gradient method for unconstrained optimization, New stochastic approximation algorithms with adaptive step sizes, Stochastic Nelder-Mead simplex method -- a new globally convergent direct search method for simulation optimization, \textit{Helios}: A modeling language for global optimization and its implementation in \textit{Newton}, A hybrid nonlinear conjugate gradient method, Exact two steps SOCP/SDP formulation for a modified conic trust region subproblem, Constrained dogleg methods for nonlinear systems with simple bounds, A new modified nonmonotone adaptive trust region method for unconstrained optimization, On the construction of quadratic models for derivative-free trust-region algorithms, Adaptive stochastic approximation 
algorithm, Monotone and nonmonotone trust-region-based algorithms for large scale unconstrained optimization problems, On the use of the energy norm in trust-region and adaptive cubic regularization subproblems, On the worst-case evaluation complexity of non-monotone line search algorithms, A new hybrid stochastic approximation algorithm, Performance of several nonlinear programming software packages on microcomputers., Inverse \(q\)-columns updating methods for solving nonlinear systems of equations, Benchmarking nonlinear optimization software in technical computing environments, Enclosing ellipsoids and elliptic cylinders of semialgebraic sets and their application to error bounds in polynomial optimization, Global convergence of a modified LS method, The Chebyshev-Shamanskii method for solving systems of nonlinear equations, A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares, Improved optimization methods for image registration problems, Two modified three-term type conjugate gradient methods and their global convergence for unconstrained optimization, Space-decomposition minimization method for large-scale minimization problems, A simple alternating direction method for the conic trust region subproblem, Global convergence of Schubert's method for solving sparse nonlinear equations, A new modified three-term Hestenes-Stiefel conjugate gradient method with sufficient descent property and its global convergence, A nonmonotone weighting self-adaptive trust region algorithm for unconstrained nonconvex optimization, A hybrid of DL and WYL nonlinear conjugate gradient methods, The hybrid BFGS-CG method in solving unconstrained optimization problems, A limited memory BFGS method for solving large-scale symmetric nonlinear equations, A fractional trust region method for linear equality constrained optimization, A high-order modified Levenberg-Marquardt method for systems of nonlinear equations with 
fourth-order convergence, An adaptive multi-step Levenberg-Marquardt method, On a new updating rule of the Levenberg-Marquardt parameter, A new supermemory gradient method for unconstrained optimization problems, Accumulative approach in multistep diagonal gradient-type method for large-scale unconstrained optimization, On an iterative algorithm of order 1.839\(\dots\) for solving the nonlinear least squares problems, An alternating variable method with varying replications for simulation response optimization, On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption, Nonmonotone adaptive trust-region method for unconstrained optimization problems, A gradient-based continuous method for large-scale optimization problems, An inexact Newton method derived from efficiency analysis, An algorithm for solving sparse nonlinear least squares problems, A quasi-discrete Newton algorithm with a nonmonotone stabilization technique, A nonmonotone adaptive trust region method and its convergence, Experiments with new stochastic global optimization search techniques, Variable metric methods for unconstrained optimization and nonlinear least squares, Comparison of partition evaluation measures in an adaptive partitioning algorithm for global optimization, Sizing the BFGS and DFP updates: Numerical study, Preconditioned low-order Newton methods, Simulation response optimization via direct conjugate direction method, A new trust region method with adaptive radius for unconstrained optimization, Sufficient descent conjugate gradient methods for large-scale optimization problems, Design and Analysis of Optimization Algorithms Using Computational Statistics, On the Halley class of methods for unconstrained optimization problems, A trust region spectral method for large-scale systems of nonlinear equations, A class of nonmonotone Armijo-type line search method for unconstrained optimization, A new cubic convergent method for solving a system of 
nonlinear equations, A cover partitioning method for bound constrained global optimization, Dealing with singularities in nonlinear unconstrained optimization, An autoadaptative limited memory Broyden's method to solve systems of nonlinear equations, A quasi-Newton trust region method with a new conic model for the unconstrained optimization, A new trust region filter algorithm, A new version of the Liu-Storey conjugate gradient method, A gradient method for unconstrained optimization in noisy environment, Numerical experience with multiple update quasi-Newton methods for unconstrained optimization, Numerical research on the sensitivity of nonmonotone trust region algorithms to their parameters, A CARTOPT METHOD FOR BOUND-CONSTRAINED GLOBAL OPTIMIZATION, Global convergence and the Powell singular function, A new nonmonotone trust region method for unconstrained optimization equipped by an efficient adaptive radius, Coordinate search algorithms in multilevel optimization, Projected gradient algorithms for optimization over order simplices, A new nonmonotone trust-region method of conic model for solving unconstrained optimization, A frame-based conjugate gradients direct search method with radial basis function interpolation model, Solving polynomial least squares problems via semidefinite programming relaxations, A modified nonmonotone BFGS algorithm for unconstrained optimization, Steepest descent preconditioning for nonlinear GMRES optimization, SIMULATION-BASED OPTIMIZATION BY NEW STOCHASTIC APPROXIMATION ALGORITHM, Complex-step derivative approximation in noisy environment, A subspace version of the Wang-Yuan augmented Lagrangian-trust region method for equality constrained optimization, A nonmonotone ODE-based method for unconstrained optimization, A modified quasi-Newton method for nonlinear equations, An engineering interpretation of Nesterov's convex minimization algorithm and time integration: application to optimal fiber orientation, An efficient 
projection-based algorithm without Lipschitz continuity for large-scale nonlinear pseudo-monotone equations, On the modified trust region algorithm for nonlinear equations, A modified conjugate gradient method based on the self-scaling memoryless BFGS update, Structured two-point stepsize gradient methods for nonlinear least squares, A new modified three-term conjugate gradient method with sufficient descent property and its global convergence, Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search, The impact of noise on evaluation complexity: the deterministic trust-region case, A modified inertial three-term conjugate gradient projection method for constrained nonlinear equations with applications in compressed sensing, A modified secant equation quasi-Newton method for unconstrained optimization, A harmonic framework for stepsize selection in gradient methods, Using QR decomposition to obtain a new instance of mesh adaptive direct search with uniformly distributed polling directions, Computational experiments with scaled initial Hessian approximation for the Broyden family methods, One side cut accelerated random search, The Gauss-Newton Methods via Conjugate Gradient Path without Line Search Technique for Solving Nonlinear Systems, A Levenberg-Marquardt algorithm with correction for singular system of nonlinear equations, An inexact line search approach using modified nonmonotone strategy for unconstrained optimization, An improved inexact Newton method, A generating set search method using curvature information, A new class of test functions for global optimization, On the use of simplex methods in constructing quadratic models, An unconstrained optimization method using nonmonotone second order Goldstein's line search, The strain hardening rotating hollow shaft subject to a positive temperature gradient, Memory gradient method with Goldstein line search, A Modified PRP Conjugate Gradient Algorithm with Trust 
Region for Optimization Problems, An ODE-Based Trust Region Filter Algorithm for Unconstrained Optimization, A nonmonotone trust region method based on nonincreasing technique of weighted average of the successive function values, On benchmarking functions for genetic algorithms, A review of linear and nonlinear Cauchy singular integral and integro-differential equations arising in mechanics, A local linear embedding module for evolutionary computation optimization, A Levenberg-Marquardt iterative solver for least-squares problems, A new trust region method for unconstrained optimization, Incorporating nonmonotone strategies into the trust region method for unconstrained optimization, Discrete filled function method for discrete global optimization, A curvilinear search algorithm for unconstrained optimization by automatic differentiation, A differential-equations algorithm for nonlinear equations, A new filled function method for nonlinear integer programming problem, A new linesearch algorithm for nonlinear least squares problems, Minimization Algorithms for Functions with Random Noise, New quasi-Newton methods for unconstrained optimization problems, Convergence of nonmonotone line search method, Global optimization using a dynamical systems approach, Dogleg paths and trust region methods with back tracking technique for unconstrained optimization, Multi-directional parallel algorithms for unconstrained optimization, Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems, A primal-dual interior-point algorithm for quadratic programming, A modified Hooke and Jeeves algorithm with likelihood ratio performance extrapolation for simulation optimization, Convergence properties of a self-adaptive Levenberg-Marquardt algorithm under local error bound condition, The substitution secant/finite difference method for large scale sparse unconstrained optimization, New 
line search methods for unconstrained optimization, An ODE-based nonmonotone method for unconstrained optimization problems, Exploiting Sparsity in SDP Relaxation of Polynomial Optimization Problems, A variant of trust-region methods for unconstrained optimization, Solving unconstrained optimization problem with a filter-based nonmonotone pattern search algorithm, Nonlinear analysis: optimization methods, convergence theory, and applications, Corrigendum to: ``Krasnosel'skii type hybrid fixed point theorems and their applications to fractional integral equations, On a Three-Step Method with the Order of Convergence \(1 + 2\sqrt{2}\) for the Solution of Systems of Nonlinear Operator Equations, A sufficient descent conjugate gradient method and its global convergence, A new trust-region method with line search for solving symmetric nonlinear equations, Structured symmetric rank-one method for unconstrained optimization, COMBINATION ADAPTIVE TRUST REGION METHOD BY NON-MONOTONE STRATEGY FOR UNCONSTRAINED NONLINEAR PROGRAMMING, The convex-decomposable operator equation and its monotonic inclusive iteration, The modified Levenberg-Marquardt method for nonlinear equations with cubic convergence, An improved interval global optimization algorithm using higher-order inclusion function forms, A MODIFIED PROJECTED CONJUGATE GRADIENT ALGORITHM FOR UNCONSTRAINED OPTIMIZATION PROBLEMS, An effective trust-region-based approach for symmetric nonlinear systems, An adaptive approach of conic trust-region method for unconstrained optimization problems, New combinatorial direction stochastic approximation algorithms, Solving systems of nonlinear equations by means of an accelerated successive orthogonal projections method, A second-order pruning step for verified global optimization, A parallel subgradient projections method for the convex feasibility problem, A self-adaptive trust region method with line search based on a simple subproblem model, A new quasi-Newton 
algorithm, A truncated Newton method with non-monotone line search for unconstrained optimization, On the construction of minimization methods of quasi-Newton type, A combined direction stochastic approximation algorithm, On the local convergence of adjoint Broyden methods, New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems, New inexact line search method for unconstrained optimization, Convergence rate of the trust region method for nonlinear equations under local error bound condition, Grid restrained Nelder-Mead algorithm, A truncated nonmonotone Gauss-Newton method for large-scale nonlinear least-squares problems, A modified quasi-Newton method for structured optimization with partial information on the Hessian, A type of modified BFGS algorithm with any rank defects and the local \(Q\)-superlinear convergence properties, Global convergence of the non-quasi-Newton method for unconstrained optimization problems, Global convergence properties of the two new dependent Fletcher-Reeves conjugate gradient methods, A fractional programming algorithm based on conic quasi-Newton trust region method for unconstrained minimization, Experiments with range computations using extrapolation, Approximate solution of the trust region problem by minimization over two-dimensional subspaces, Partitioning group correction Cholesky techniques for large scale sparse unconstrained optimization, Global convergence of a memory gradient method for unconstrained optimization, A quasi-Newton based pattern search algorithm for unconstrained optimization, A new class of supermemory gradient methods, A compact limited memory method for large scale unconstrained optimization, Parallel quasi-Newton methods for unconstrained optimization, Convergence of the Polak-Ribière-Polyak conjugate gradient method, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, Successive column correction algorithms for solving
sparse nonlinear systems of equations, On the convergence of partitioning group correction algorithms, A hybrid simplex search and particle swarm optimization for unconstrained optimization, Discrete global descent method for discrete global optimization and nonlinear integer programming, Some research on Levenberg-Marquardt method for the nonlinear equations, A descent nonlinear conjugate gradient method for large-scale unconstrained optimization, A nonmonotone adaptive trust region method based on conic model for unconstrained optimization, Equal angle distribution of polling directions in direct-search methods, A system of nonsmooth equations solver based upon subgradient method, An algorithm for solving nonlinear least-squares problems with a new curvilinear search, On the limited memory BFGS method for large scale optimization, A new quasi-Newton pattern search method based on symmetric rank-one update for unconstrained optimization, A derivative-free nonmonotone line-search technique for unconstrained optimization, A nonmonotone trust-region method of conic model for unconstrained optimization, The global and superlinear convergence of a new nonmonotone MBFGS algorithm on convex objective functions, Global convergence of a modified spectral FR conjugate gradient method, Some remarks on conjugate gradient methods without line search, Convergence of PRP method with new nonmonotone line search, Global convergence of a memory gradient method without line search, Nonlinear reduction for solving deficient polynomial systems by continuation methods, Multi-step nonlinear conjugate gradient methods for unconstrained minimization, Global convergence of the nonmonotone MBFGS method for nonconvex unconstrained minimization, Unconstrained derivative-free optimization by successive approximation, A memetic algorithm for the flexible flow line scheduling problem with processor blocking, A rational gradient model for minimization, A coordinate gradient descent method for 
nonsmooth separable minimization, A quasi-Gauss-Newton method for solving nonlinear algebraic equations, A conjugate gradient method for unconstrained optimization problems, A quasi-Newton method for solving nonlinear algebraic equations, Nonlinear conjugate gradient methods with structured secant condition for nonlinear least squares problems, A new framework for sharp and efficient resolution of NCSP with manifolds of solutions, Two new conjugate gradient methods based on modified secant equations, A heuristic iterated-subspace minimization method with pattern search for unconstrained optimization, Symbolic interval inference approach for subdivision direction selection in interval partitioning algorithms, A nonmonotone conic trust region method based on line search for solving unconstrained optimization, A note on the Levenberg-Marquardt parameter, Symbolic homotopy construction, Recognizing underlying sparsity in optimization, On the use of function-values in unconstrained optimisation, Combining trust-region techniques and Rosenbrock methods to compute stationary points, Numerical experience with a class of self-scaling quasi-Newton algorithms, Modified partial-update Newton-type algorithms for unary optimization, A generalized conjugate gradient algorithm, Correlative sparsity in primal-dual interior-point methods for LP, SDP, and SOCP, Tight convex underestimators for \({\mathcal{C}^2}\)-continuous problems.
II: Multivariate functions, A gentle introduction to Numerica, A modified PRP conjugate gradient method, A new trust region method with adaptive radius, Low order-value optimization and applications, A conic trust-region method and its convergence properties, A new backtracking inexact BFGS method for symmetric nonlinear equations, Tensor methods for large sparse systems of nonlinear equations, A hybrid of the Newton-GMRES and electromagnetic meta-heuristic methods for solving systems of nonlinear equations, A descent algorithm without line search for unconstrained optimization, Globally convergent Polak-Ribière-Polyak conjugate gradient methods under a modified Wolfe line search, A model-trust region algorithm utilizing a quadratic interpolant, Scaled optimal path trust-region algorithm, On the convergence of a process basing on the modified secant method, On the use of curvature estimates in quasi-Newton methods, On the conditioning of the Hessian approximation in quasi-Newton methods, Solution of nonlinear systems of equations by an optimal projection method, Generalized Polak-Ribière algorithm, Variational quasi-Newton methods for unconstrained optimization, A parallel asynchronous Newton algorithm for unconstrained optimization, Successive element correction algorithms for sparse unconstrained optimization, Comparative assessment of algorithms and software for global optimization, A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems, A nonmonotone trust region method for unconstrained optimization, The use of alternation and recurrences in two-step quasi-Newton methods, Three-step fixed-point quasi-Newton methods for unconstrained optimisation, Convergence of descent method with new line search, An improvement of adaptive cubic regularization method for unconstrained optimization problems, Partially symmetrical derivative-free Liu–Storey projection method for convex constrained equations, A 
non-monotone pattern search approach for systems of nonlinear equations, On high-order model regularization for multiobjective optimization, Globally convergent conjugate gradient algorithms, Model-Based Derivative-Free Methods for Convex-Constrained Optimization, Convergence Analysis of Discrete High-Index Saddle Dynamics, An Experimental Study of Benchmarking Functions for Genetic Algorithms, On Variable-Metric Methods for Sparse Hessians, A Globally Convergent Trust-Region Method for Large-Scale Symmetric Nonlinear Systems, A family of hybrid conjugate gradient methods for unconstrained optimization, On the Barzilai–Borwein gradient methods with structured secant equation for nonlinear least squares problems, Improved Nelder–Mead algorithm in high dimensions with adaptive parameters based on Chebyshev spacing points, Descent direction method with line search for unconstrained optimization in noisy environment, Performance of various BFGS implementations with limited precision second-order information, A dimension-reducing method for solving systems of nonlinear equations in, A Subspace Study on Conjugate Gradient Algorithms, A numerical evaluation of some collinear scaling algorithms for unconstrained, Solving systems of nonlinear equations In using a rotating hyperplane in, The method of successive affine reduction for nonlinear minimization, A new dimension—reducing method for solving systems of nonlinear equations, A new Gauss–Newton-like method for nonlinear equations, A Multigrid Approach to SDP Relaxations of Sparse Polynomial Optimization Problems, An Approximation Scheme for Distributionally Robust Nonlinear Optimization, Convergence of memory gradient methods, A DIRECT SEARCH QUASI-NEWTON METHOD FOR NONSMOOTH UNCONSTRAINED OPTIMIZATION, GLOBAL CONVERGENCE OF A SPECIAL CASE OF THE DAI–YUAN FAMILY WITHOUT LINE SEARCH, A trust-region method with a conic model for unconstrained optimization, A new class of quasi-Newton updating formulas,
Avoiding Modified Matrix Factorizations in Newton-like Methods, A response surface method-based hybrid optimizer, The Mesh Adaptive Direct Search Algorithm for Granular and Discrete Variables, Nonmonotone Self-adaptive Levenberg–Marquardt Approach for Solving Systems of Nonlinear Equations, Accelerating the modified Levenberg-Marquardt method for nonlinear equations, A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling, Tuning Algorithms for Stochastic Black-Box Optimization: State of the Art and Future Perspectives, SDP RELAXATIONS FOR QUADRATIC OPTIMIZATION PROBLEMS DERIVED FROM POLYNOMIAL OPTIMIZATION PROBLEMS, An efficient Levenberg–Marquardt method with a new LM parameter for systems of nonlinear equations, Eigenvalues and switching algorithms for Quasi-Newton updates, CONVERGENCE PROPERTY AND MODIFICATIONS OF A MEMORY GRADIENT METHOD, GLOBAL CONVERGENCE OF SHORTEST-RESIDUAL FAMILY OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH, A family of quasi-Newton methods for unconstrained optimization problems, Constructing test functions for global optimization using continuous formulations of graph problems, New BFGS method for unconstrained optimization problem based on modified Armijo line search, On CG algorithms as objects of scientific study: An appendix, GLOBAL CONVERGENCE OF TWO KINDS OF THREE-TERM CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH, Computational experience with globally convergent descent methods for large sparse systems of nonlinear equations, An Initialization Strategy for High-Dimensional Surrogate-Based Expensive Black-Box Optimization, A class of diagonal preconditioners for limited memory BFGS method, Convergence analysis of a block improvement method for polynomial optimization over unit spheres, Predictive Algorithm for Detection of Microcracks from Macroscale Observables, An improved trust region method for unconstrained
optimization, A new family of high-order directions for unconstrained optimization inspired by Chebyshev and Shamanskii methods, Solving nonlinear equations with the Newton–Krylov method based on automatic differentiation, On the convergence properties of the unmodified PRP method with a non-descent line search, A new class of memory gradient methods with inexact line searches, Global convergence of the DY conjugate gradient method with Armijo line search for unconstrained optimization problems, Sprouting search—an algorithmic framework for asynchronous parallel unconstrained optimization, The non-monotone conic algorithm, A Simulated Annealing-Based Barzilai–Borwein Gradient Method for Unconstrained Optimization Problems, Hybrid conjugate gradient methods for unconstrained optimization, Nonlinear programming on a microcomputer, Two improved classes of Broyden's methods for solving nonlinear systems of equations, A novel global optimization technique for high dimensional functions, Wide interval for efficient self-scaling quasi-Newton algorithms, A globally convergent BFGS method for nonconvex minimization without line searches, A new self-adaptive trust region method for unconstrained optimization, Improving the Flexibility and Robustness of Model-based Derivative-free Optimization Solvers, Global convergence of conjugate gradient method, A Modified Trust Region Algorithm, A More Lenient Stopping Rule for Line Search Algorithms, Stochastic Three Points Method for Unconstrained Smooth Minimization, Exploiting problem structure in pattern search methods for unconstrained optimization, Two new Newton-type methods for the nonlinear equations, Scaled memoryless BFGS preconditioned steepest descent method for very large-scale unconstrained optimization, Testing a Class of Methods for Solving Minimization Problems with Simple Bounds on the Variables, Globally convergent algorithms for solving unconstrained optimization problems, An efficient
adaptive trust-region method for systems of nonlinear equations, Alternate step gradient method, Discrete Newton's method with local variations for solving large-scale nonlinear systems, A spectral KRMI conjugate gradient method under the strong-Wolfe line search, High-Index Optimization-Based Shrinking Dimer Method for Finding High-Index Saddle Points, A Collection of Test Multiextremal Optimal Control Problems, A new two-parameter family of nonlinear conjugate gradient methods, A nonlinear conjugate gradient method based on the MBFGS secant condition, Calculus Identities for Generalized Simplex Gradients: Rules and Applications, A new alternating direction trust region method based on conic model for solving unconstrained optimization, COCO: a platform for comparing continuous optimizers in a black-box setting, A nonmonotone hybrid method for nonlinear systems, A variant of curved search method, Extra updates for the BFGS method, The higher-order Levenberg–Marquardt method with Armijo type line search for nonlinear equations, Correction of trust region method with a new modified Newton method, Two families of self-adjusting spectral hybrid DL conjugate gradient methods and applications in image denoising, A family of hybrid conjugate gradient method with restart procedure for unconstrained optimizations and image restorations, A homogeneous Rayleigh quotient with applications in gradient methods, An accelerated proximal gradient method for multiobjective optimization, Spectral conjugate gradient methods for vector optimization problems, Adaptive nonmonotone line search method for unconstrained optimization, Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search, Simultaneous perturbation stochastic approximation: towards one-measurement per iteration, Two families of hybrid conjugate gradient methods with restart procedures and their applications, Structured adaptive
spectral-based algorithms for nonlinear least squares problems with robotic arm modelling applications, A Derivative-Free Nonlinear Least Squares Solver, A three-term conjugate gradient algorithm with restart procedure to solve image restoration problems, On maximum residual nonlinear Kaczmarz-type algorithms for large nonlinear systems of equations, Solving nonlinear equations with a direct Broyden method and its acceleration, A modified Levenberg-Marquardt method for solving system of nonlinear equations, Efficiency of higher-order algorithms for minimizing composite functions, Simultaneous predictive maintenance and inventory policy in a continuously monitoring system using simulation optimization, Quadratic regularization methods with finite-difference gradient approximations, A modified inexact Levenberg-Marquardt method with the descent property for solving nonlinear equations, On pseudoinverse-free block maximum residual nonlinear Kaczmarz method for solving large-scale nonlinear system of equations, The regularization continuation method for optimization problems with nonlinear equality constraints, Stable factorized quasi-Newton methods for nonlinear least-squares problems, A new nonmonotone line search technique for unconstrained optimization, On the behaviour of a combined extra-updating/self-scaling BFGS method, Implicit updates in multistep quasi-Newton methods, Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations, Global convergence technique for the Newton method with periodic Hessian evaluation, Minimization algorithms based on supervisor and searcher cooperation, On the nonmonotone line search, Partially updated switching-method for systems of nonlinear equations, On the final steps of Newton and higher order methods, Three-steps modified Levenberg-Marquardt method with a new line search for systems of nonlinear equations, An improved trust region method for unconstrained optimization
Uses Software