Optimization theory and methods. Nonlinear programming
DOI: 10.1007/b106451 · zbMATH Open: 1129.90002 · OpenAlex: W2471988522 · MaRDI QID: Q2500511
Authors: Wenyu Sun, Yaxiang Yuan
Publication date: 17 August 2006
Published in: Springer Optimization and Its Applications
Full work available at URL: https://doi.org/10.1007/b106451
Keywords: quadratic programming; sequential quadratic programming; nonsmooth optimization; conjugate gradient method; line search; feasible direction methods; penalty function methods; (inexact) Newton method; (non-)quasi-Newton methods; nonlinear least-squares problems; self-scaling variable metric method; theory of constrained optimization; trust-region and conic model methods; trust-region methods for constrained problems
MSC: Quadratic programming (90C20); Methods of successive quadratic programming type (90C55); Nonlinear programming (90C30); Methods of quasi-Newton type (90C53); Methods of reduced gradient type (90C52); Introductory exposition (textbooks, tutorial papers, etc.) pertaining to operations research and mathematical programming (90-01)
Cited in (showing first 100 items)
- A New First-Order Algorithmic Framework for Optimization Problems with Orthogonality Constraints
- Simplified iteratively regularized Gauss-Newton method in Banach spaces under a general source condition
- Two optimal Hager-Zhang conjugate gradient methods for solving monotone nonlinear equations
- A globally convergent filter-type trust region method for semidefinite programming
- A nonmonotone weighting self-adaptive trust region algorithm for unconstrained nonconvex optimization
- Distributed reconstruction of time-varying graph signals via a modified Newton's method
- A parallel line search subspace correction method for composite convex optimization
- A family of Hager-Zhang conjugate gradient methods for system of monotone nonlinear equations
- A note on a multiplicative parameters gradient method
- A high-order modified Levenberg-Marquardt method for systems of nonlinear equations with fourth-order convergence
- Uniqueness and numerical scheme for the Robin coefficient identification of the time-fractional diffusion equation
- Robust time-domain output error method for identifying continuous-time systems with time delay
- A novel self-adaptive trust region algorithm for unconstrained optimization
- A hybrid scaling parameter for the scaled memoryless BFGS method based on the \(\ell_\infty\) matrix norm
- A derivative-free scaling memoryless DFP method for solving large scale nonlinear monotone equations
- A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints
- Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization
- Two-parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length
- A new hybrid conjugate gradient method for large-scale unconstrained optimization problem with non-convex objective function
- Nonlinear programming. An introduction
- A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems
- A modified descent Polak-Ribière-Polyak conjugate gradient method with global convergence property for nonconvex functions
- Riemannian conjugate gradient methods with inverse retraction
- A new robust line search technique based on Chebyshev polynomials
- A brief introduction to manifold optimization
- A generalized worst-case complexity analysis for non-monotone line searches
- Gauss-Newton-type methods for bilevel optimization
- Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions
- Explicit pseudo-transient continuation and the trust-region updating strategy for unconstrained optimization
- A new simple model trust-region method with generalized Barzilai-Borwein parameter for large-scale optimization
- An augmented memoryless BFGS method based on a modified secant equation with application to compressed sensing
- A new CG algorithm based on a scaled memoryless BFGS update with adaptive search strategy, and its application to large-scale unconstrained optimization problems
- Memoryless quasi-Newton methods based on the spectral-scaling Broyden family for Riemannian optimization
- Convergence analysis of simplified iteratively regularized Gauss–Newton method in a Banach space setting
- Two families of scaled three-term conjugate gradient methods with sufficient descent property for nonconvex optimization
- A new family of conjugate gradient methods for unconstrained optimization
- Generalized continuation Newton methods and the trust-region updating strategy for the underdetermined system
- Simultaneous recovery of surface heat flux and thickness of a solid structure by ultrasonic measurements
- Some iterative methods for the largest positive definite solution to a class of nonlinear matrix equation
- A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem
- A novel diffeomorphic model for image registration and its algorithm
- A QSC method for fractional subdiffusion equations with fractional boundary conditions and its application in parameters identification
- Orthogonal canonical correlation analysis and applications
- A perfect example for the BFGS method
- Cubic interpolation: a line search technique for fuzzy optimization problems
- POD/DEIM reduced-order modeling of time-fractional partial differential equations with applications in parameter identification
- Asymptotic surrogate constraint method and its convergence for a class of semi-infinite programming
- A derivative-free multivariate spectral projection algorithm for constrained nonlinear monotone equations
- A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares
- A modified scaled memoryless BFGS preconditioned conjugate gradient algorithm for nonsmooth convex optimization
- Improved optimization methods for image registration problems
- Total variation image restoration method based on subspace optimization
- An accelerated nonmonotone trust region method with adaptive trust region for unconstrained optimization
- A quadratic penalty method for hypergraph matching
- An adaptive scaled BFGS method for unconstrained optimization
- A modified three-term conjugate gradient method with sufficient descent property
- An equivalency condition of nonsingularity in nonlinear semidefinite programming
- Numerical treatment of nonlinear MHD Jeffery-Hamel problems using stochastic algorithms
- A new subspace correction method for nonlinear unconstrained convex optimization problems
- Two extensions of the Dai-Liao method with sufficient descent property based on a penalization scheme
- Multi-modality image registration models and efficient algorithms
- A new supermemory gradient method for unconstrained optimization problems
- Quadratic interpolation technique to minimize univariable fuzzy functions
- Two adaptive Dai-Liao nonlinear conjugate gradient methods
- On \(R\)-linear convergence analysis for a class of gradient methods
- Two optimal Dai-Liao conjugate gradient methods
- Nonmonotone conic trust region method with line search technique for bound constrained optimization
- A dwindling filter line search algorithm for nonlinear equality constrained optimization
- The \(d\)-level nested logit model: assortment and price optimization problems
- Inhomogeneous polynomial optimization over a convex set: an approximation approach
- A modified nonmonotone trust region line search method
- A linear hybridization of the Hestenes-Stiefel method and the memoryless BFGS technique
- Convergence rate of the modified Levenberg-Marquardt method under Hölderian local error bound
- An optimal parameter choice for the Dai-Liao family of conjugate gradient methods by avoiding a direction of the maximum magnification by the search direction matrix
- A novel augmented Lagrangian method of multipliers for optimization with general inequality constraints
- A nonmonotonic trust region algorithm for a class of semi-infinite minimax programming
- Fast finite difference approximation for identifying parameters in a two-dimensional space-fractional nonlocal model with variable diffusivity coefficients
- An adaptive conjugacy condition and related nonlinear conjugate gradient methods
- A nonmonotone globalization algorithm with preconditioned gradient path for unconstrained optimization
- A subspace version of the Powell-Yuan trust-region algorithm for equality constrained optimization
- Two accelerated nonmonotone adaptive trust region line search methods
- A new nonlinear filter constructed from the Newton method and EPR in image restoration
- Nonmonotone second-order Wolfe's line search method for unconstrained optimization problems
- On the sufficient descent property of the Shanno's conjugate gradient method
- On convergence analysis of a derivative-free trust region algorithm for constrained optimization with separable structure
- Two modified three-term conjugate gradient methods with sufficient descent property
- An improved inversion-free method for solving the matrix equation \(X + A^\ast X^{-{\alpha}}A = Q\)
- A modified scaled memoryless symmetric rank-one method
- A new method for parameter estimation of edge-preserving regularization in image restoration
- On the sufficient descent condition of the Hager-Zhang conjugate gradient methods
- Identification of elastic orthotropic material parameters by the singular boundary method
- A preconditioned descent algorithm for variational inequalities of the second kind involving the \(p\)-Laplacian operator
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- An adaptive conjugate gradient algorithm for large-scale unconstrained optimization
- Optimization theory. A concise introduction
- A combined SQP-IPM algorithm for solving large-scale nonlinear optimization problems
- A feasible direction method for the semidefinite program with box constraints
- On solving L-SR1 trust-region subproblems
- Hill-Climbing Algorithm with a Stick for Unconstrained Optimization Problems
- Nonlinear programming techniques for equilibria