A truncated Newton method with non-monotone line search for unconstrained optimization
Publication: 1095799
DOI: 10.1007/BF00940345
zbMath: 0632.90059
MaRDI QID: Q1095799
Luigi Grippo, Francesco Lampariello, Stefano Lucidi
Publication date: 1989
Published in: Journal of Optimization Theory and Applications
Keywords: unconstrained minimization; large scale optimization; ill-conditioned; non-monotone line search; truncated Newton algorithm
Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Newton-type methods (49M15); Numerical methods based on nonlinear programming (49M37)
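The non-monotone line search of the paper accepts a trial point if it improves on the worst of the last few objective values, rather than on the current one, so occasional increases of the objective are tolerated while convergence guarantees are retained. Below is a minimal illustrative Python sketch of a backtracking search enforcing such a non-monotone Armijo rule; the driver uses steepest-descent directions on the Rosenbrock function purely for demonstration (the paper combines the rule with truncated Newton directions), and the parameter values (gamma, delta, memory length) are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo_step(f, x, d, g, f_hist, gamma=1e-4, delta=0.5, max_backtracks=60):
    """Backtracking step size for a non-monotone (GLL-type) Armijo rule:
    accept alpha once f(x + alpha*d) <= max(f_hist) + gamma*alpha*g^T d,
    where f_hist stores the objective values of the last few iterates."""
    g_dot_d = float(np.dot(g, d))   # directional derivative; negative for a descent direction
    f_ref = max(f_hist)             # reference value: worst of the stored recent values
    alpha = 1.0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + gamma * alpha * g_dot_d:
            return alpha
        alpha *= delta              # shrink the trial step and retry
    return alpha

# Illustrative driver on the Rosenbrock function with steepest-descent directions.
# (The paper itself pairs the rule with truncated Newton directions.)
def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

x = np.array([-1.2, 1.0])
f_hist = deque([rosenbrock(x)], maxlen=10)   # memory length M = 10 (an illustrative choice)
for _ in range(20000):
    g = rosenbrock_grad(x)
    if np.linalg.norm(g) < 1e-6:
        break
    d = -g                                   # steepest-descent direction, for illustration only
    alpha = nonmonotone_armijo_step(rosenbrock, x, d, g, f_hist)
    x = x + alpha * d
    f_hist.append(rosenbrock(x))

print("solution approx:", x, "f =", rosenbrock(x))
```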
Related Items
- A nonmonotone inexact Newton method for unconstrained optimization
- An unconstrained optimization technique for large-scale linearly constrained convex minimization problems
- A class of nonmonotone stabilization trust region methods
- Convergence properties of inexact projected gradient methods
- Nonmonotone trust region method for solving optimization problems
- A line search trust-region algorithm with nonmonotone adaptive radius for a system of nonlinear equations
- A hybrid of adjustable trust-region and nonmonotone algorithms for unconstrained optimization
- Parallel variable distribution algorithm for constrained optimization with nonmonotone technique
- A class of nonmonotone Armijo-type line search method for unconstrained optimization
- A non-monotone trust region algorithm for unconstrained optimization with dynamic reference iteration updates using filter
- Convergence and numerical results for a parallel asynchronous quasi-Newton method
- A truncated Newton algorithm for nonconvex sparse recovery
- A truncated Newton method in an augmented Lagrangian framework for nonlinear programming
- An active set truncated Newton method for large-scale bound constrained optimization
- Nonmonotone trust-region method for nonlinear programming with general constraints and simple bounds
- A truncated nonmonotone Gauss-Newton method for large-scale nonlinear least-squares problems
- On the convergence rate of scaled gradient projection method
- Global convergence of a memory gradient method for unconstrained optimization
- A new nonmonotone trust region method for unconstrained optimization equipped by an efficient adaptive radius
- A modified nonmonotone BFGS algorithm for unconstrained optimization
- An active set Newton-CG method for \(\ell_1\) optimization
- A numerical evaluation of some collinear scaling algorithms for unconstrained
- Parallel algorithm for unconstrained optimization based on decomposition techniques
- Preconditioning Newton-Krylov methods in nonconvex large scale optimization
- A nonmonotone line search slackness technique for unconstrained optimization
- An extended nonmonotone line search technique for large-scale unconstrained optimization
- An adaptive nonmonotone line search technique for solving systems of nonlinear equations
- On the convergence of a new hybrid projection algorithm
- A nonmonotone trust region method with adaptive radius for unconstrained optimization problems
- An active set feasible method for large-scale minimization problems with bound constraints
- A non-monotone line search algorithm for unconstrained optimization
- A novel hybrid trust region algorithm based on nonmonotone and LOOCV techniques
- A new nonmonotone adaptive retrospective trust region method for unconstrained optimization problems
- An efficient nonmonotone trust-region method for unconstrained optimization
- Conjugate direction methods and polarity for quadratic hypersurfaces
- On the nonmonotonicity degree of nonmonotone line searches
- A nonmonotone trust-region line search method for large-scale unconstrained optimization
- Accelerating the convergence in the single-source and multi-source Weber problems
- A Shamanskii-like self-adaptive Levenberg-Marquardt method for nonlinear equations
- A Frank-Wolfe based branch-and-bound algorithm for mean-risk optimization
- Partial projected Newton method for a class of stochastic linear complementarity problems
- An active-set proximal-Newton algorithm for \(\ell_1\) regularized optimization problems with box constraints
- The Gauss-Newton Methods via Conjugate Gradient Path without Line Search Technique for Solving Nonlinear Systems
- A survey of gradient methods for solving nonlinear optimization
- An inexact line search approach using modified nonmonotone strategy for unconstrained optimization
- Smoothing Newton algorithm for the second-order cone programming with a nonmonotone line search
- An exact penalty-Lagrangian approach for large-scale nonlinear programming
- MAKHA -- a new hybrid swarm intelligence global optimization algorithm
- Nonmonotone Self-adaptive Levenberg–Marquardt Approach for Solving Systems of Nonlinear Equations
- A trust-region approach with novel filter adaptive radius for system of nonlinear equations
- An efficient Levenberg–Marquardt method with a new LM parameter for systems of nonlinear equations
- Nonmonotone trust region methods with curvilinear path in unconstrained optimization
- Multi-step nonlinear conjugate gradient methods for unconstrained minimization
- Global convergence of the nonmonotone MBFGS method for nonconvex unconstrained minimization
- Numerical study of a smoothing algorithm for the complementarity system over the second-order cone
- A practical PR+ conjugate gradient method only using gradient
- Nonmonotone projected gradient methods based on barrier and Euclidean distances
- Conjugate gradient (CG)-type method for the solution of Newton's equation within optimization frameworks
- A derivative-based algorithm for a particular class of mixed variable optimization problems
- A filter trust-region algorithm for unconstrained optimization with strong global convergence properties
- A nonmonotone PSB algorithm for solving unconstrained optimization
- Modified nonmonotone Armijo line search for descent method
- Global convergence of nonmonotone descent methods for unconstrained optimization problems
- A reduced-space line-search method for unconstrained optimization via random descent directions
- A kind of nonmonotone filter method for nonlinear complementarity problem
- A curvilinear search algorithm for unconstrained optimization by automatic differentiation
- Convergence property of gradient-type methods with non-monotone line search in the presence of perturbations
- Convergence properties of nonmonotone spectral projected gradient methods
- Convergence of nonmonotone line search method
- A new trust region method for solving least-square transformation of system of equalities and inequalities
- Nonmonotone adaptive trust-region method for unconstrained optimization problems
- Issues on the use of a modified Bunch and Kaufman decomposition for large scale Newton's equation
- A class of nonmonotone stabilization methods in unconstrained optimization
- A nonmonotone trust region method for unconstrained optimization problems on Riemannian manifolds
- Planar methods and grossone for the conjugate gradient breakdown in nonlinear programming
- On efficiency of nonmonotone Armijo-type line searches
- A truncated conjugate gradient method with an inexact Gauss-Newton technique for solving nonlinear systems
- A nonmonotone truncated Newton-Krylov method exploiting negative curvature directions, for large scale unconstrained optimization
- A new nonmonotone filter Barzilai–Borwein method for solving unconstrained optimization problems
- Global convergence of conjugate gradient method
- A new family of conjugate gradient methods
- A nonmonotone smoothing Newton algorithm for weighted complementarity problem
- A conjugate direction based simplicial decomposition framework for solving a specific class of dense convex quadratic programs
- A Feasible Active Set Method with Reoptimization for Convex Quadratic Mixed-Integer Programming
- A globally convergent BFGS method with nonmonotone line search for non-convex minimization
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- A modified SQP method with nonmonotone technique and its global convergence
- COMBINATION ADAPTIVE TRUST REGION METHOD BY NON-MONOTONE STRATEGY FOR UNCONSTRAINED NONLINEAR PROGRAMMING
- A nonmonotone adaptive trust region method and its convergence
- A class of nonmonotone trust region algorithms for unconstrained optimization problems
- A nonmonotone smoothing Newton method for system of nonlinear inequalities based on a new smoothing function
- Planar conjugate gradient algorithm for large-scale unconstrained optimization. II: Application
- A survey of truncated-Newton methods
- A framework of conjugate direction methods for symmetric linear systems in optimization
- A variant of curved search method
- A relaxed nonmonotone adaptive trust region method for solving unconstrained optimization problems
- Nonmonotonic trust region algorithm
- A parallel asynchronous Newton algorithm for unconstrained optimization
- Multi-phase algorithm design for accurate and efficient model fitting
- Convergence of descent method with new line search
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- A non-monotone pattern search approach for systems of nonlinear equations
- Nonmonotone quasi-Newton-based conjugate gradient methods with application to signal processing
- Convergence of a Class of Nonmonotone Descent Methods for Kurdyka–Łojasiewicz Optimization Problems
- An Improvement of the Pivoting Strategy in the Bunch and Kaufman Decomposition, Within Truncated Newton Methods
- CONVERGENCE PROPERTY AND MODIFICATIONS OF A MEMORY GRADIENT METHOD
- A new nonmonotone line search technique for unconstrained optimization
- A new nonmonotone line search technique for unconstrained optimization
- Cost approximation algorithms with nonmonotone line searches for a general class of nonlinear programs
- On the nonmonotone line search
- On the final steps of Newton and higher order methods
- An efficient adaptive trust-region method for systems of nonlinear equations
- A Perry-type derivative-free algorithm for solving nonlinear system of equations and minimizing ℓ1 regularized problem
- A BFGS trust-region method with a new nonmonotone technique for nonlinear equations
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- AN IMPROVED ADAPTIVE TRUST-REGION METHOD FOR UNCONSTRAINED OPTIMIZATION
- A superlinearly convergent nonmonotone quasi-Newton method for unconstrained multiobjective optimization
Cites Work
- A restricted trust region algorithm for unconstrained optimization
- Memory gradient method for the minimization of functions
- Minimization of functions having Lipschitz continuous first partial derivatives
- Truncated-Newton algorithms for large-scale unconstrained optimization
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- A Family of Trust-Region-Based Algorithms for Unconstrained Minimization with Strong Global Convergence Properties
- Testing Unconstrained Optimization Software
- The watchdog technique for forcing convergence in algorithms for constrained optimization
- Inexact Newton Methods
- Newton-type methods for unconstrained and linearly constrained optimization
- A Nonmonotone Line Search Technique for Newton’s Method