Optimization theory and methods. Nonlinear programming
DOI: 10.1007/b106451 · zbMATH Open: 1129.90002 · OpenAlex: W2471988522 · MaRDI QID: Q2500511
Authors: Wenyu Sun, Yaxiang Yuan
Publication date: 17 August 2006
Published in: Springer Optimization and Its Applications
Full work available at URL: https://doi.org/10.1007/b106451
Keywords: quadratic programming; sequential quadratic programming; nonsmooth optimization; conjugate gradient method; line search; feasible direction methods; penalty function methods; (inexact) Newton method; (non-)quasi-Newton methods; nonlinear least-squares problems; self-scaling variable metric method; theory of constrained optimization; trust-region and conic model methods; trust-region methods for constrained problems
MSC classifications: Quadratic programming (90C20); Methods of successive quadratic programming type (90C55); Nonlinear programming (90C30); Methods of quasi-Newton type (90C53); Methods of reduced gradient type (90C52); Introductory exposition (textbooks, tutorial papers, etc.) pertaining to operations research and mathematical programming (90-01)
Cited in (showing the first 100 items):
- An accelerated nonmonotone trust region method with adaptive trust region for unconstrained optimization
- A quadratic penalty method for hypergraph matching
- An adaptive scaled BFGS method for unconstrained optimization
- A modified three-term conjugate gradient method with sufficient descent property
- An equivalency condition of nonsingularity in nonlinear semidefinite programming
- Numerical treatment of nonlinear MHD Jeffery-Hamel problems using stochastic algorithms
- A new subspace correction method for nonlinear unconstrained convex optimization problems
- Two extensions of the Dai-Liao method with sufficient descent property based on a penalization scheme
- Multi-modality image registration models and efficient algorithms
- A new supermemory gradient method for unconstrained optimization problems
- Quadratic interpolation technique to minimize univariable fuzzy functions
- Two adaptive Dai-Liao nonlinear conjugate gradient methods
- On \(R\)-linear convergence analysis for a class of gradient methods
- Two optimal Dai-Liao conjugate gradient methods
- Nonmonotone conic trust region method with line search technique for bound constrained optimization
- A dwindling filter line search algorithm for nonlinear equality constrained optimization
- The \(d\)-level nested logit model: assortment and price optimization problems
- Inhomogeneous polynomial optimization over a convex set: an approximation approach
- A modified nonmonotone trust region line search method
- A linear hybridization of the Hestenes-Stiefel method and the memoryless BFGS technique
- Convergence rate of the modified Levenberg-Marquardt method under Hölderian local error bound
- An optimal parameter choice for the Dai-Liao family of conjugate gradient methods by avoiding a direction of the maximum magnification by the search direction matrix
- A novel augmented Lagrangian method of multipliers for optimization with general inequality constraints
- A nonmonotonic trust region algorithm for a class of semi-infinite minimax programming
- Fast finite difference approximation for identifying parameters in a two-dimensional space-fractional nonlocal model with variable diffusivity coefficients
- An adaptive conjugacy condition and related nonlinear conjugate gradient methods
- A nonmonotone globalization algorithm with preconditioned gradient path for unconstrained optimization
- A subspace version of the Powell-Yuan trust-region algorithm for equality constrained optimization
- Two accelerated nonmonotone adaptive trust region line search methods
- A new nonlinear filter constructed from the Newton method and EPR in image restoration
- Nonmonotone second-order Wolfe's line search method for unconstrained optimization problems
- On the sufficient descent property of the Shanno's conjugate gradient method
- On convergence analysis of a derivative-free trust region algorithm for constrained optimization with separable structure
- Two modified three-term conjugate gradient methods with sufficient descent property
- An improved inversion-free method for solving the matrix equation \(X + A^\ast X^{-{\alpha}}A = Q\)
- A modified scaled memoryless symmetric rank-one method
- A new method for parameter estimation of edge-preserving regularization in image restoration
- On the sufficient descent condition of the Hager-Zhang conjugate gradient methods
- Identification of elastic orthotropic material parameters by the singular boundary method
- A preconditioned descent algorithm for variational inequalities of the second kind involving the \(p\)-Laplacian operator
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- An adaptive conjugate gradient algorithm for large-scale unconstrained optimization
- Optimization theory. A concise introduction
- A combined SQP-IPM algorithm for solving large-scale nonlinear optimization problems
- A feasible direction method for the semidefinite program with box constraints
- On solving L-SR1 trust-region subproblems
- Hill-Climbing Algorithm with a Stick for Unconstrained Optimization Problems
- Nonlinear programming techniques for equilibria
- A filter trust region method for solving semi-infinite programming problems
- Parameters estimation for a new anomalous thermal diffusion model in layered media
- Cross-Hill: a heuristic method for global optimization
- On optimality of two adaptive choices for the parameter of Dai-Liao method
- A subspace version of the Wang-Yuan augmented Lagrangian-trust region method for equality constrained optimization
- Numerical research on the sensitivity of nonmonotone trust region algorithms to their parameters
- New nonlinear conjugate gradient methods based on optimal Dai-Liao parameters
- A nonmonotone supermemory gradient algorithm for unconstrained optimization
- A nonmonotone scaled conjugate gradient algorithm for large-scale unconstrained optimization
- A family of iterative methods for computing Moore-Penrose inverse of a matrix
- A class of adaptive Dai-Liao conjugate gradient methods based on the scaled memoryless BFGS update
- Reconstructing local volatility using total variation
- Two modifications of the method of the multiplicative parameters in descent gradient methods
- A trust-region method with improved adaptive radius for systems of nonlinear equations
- A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
- An ODE-based nonmonotone method for unconstrained optimization problems
- A cone constrained convex program: structure and algorithms
- An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart property
- A hybrid optimization method for multiplicative noise and blur removal
- Robust ridge estimator in restricted semiparametric regression models
- A nonmonotone trust region method based on simple conic models for unconstrained optimization
- Two new decomposition algorithms for training bound-constrained support vector machines
- Extended least trimmed squares estimator in semiparametric regression models with correlated errors
- An improved trust region algorithm for nonlinear equations
- A new approximation of the matrix rank function and its application to matrix rank minimization
- A hybrid trust region algorithm for unconstrained optimization
- Trust region algorithm with two subproblems for bound constrained problems
- A dwindling filter line search method for unconstrained optimization
- Long range search for maximum likelihood in exponential families
- A modified scaling parameter for the memoryless BFGS updating formula
- Nonlinear optimization.
- Two modified scaled nonlinear conjugate gradient methods
- A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei
- Practical methods of optimization.
- The modified Levenberg-Marquardt method for nonlinear equations with cubic convergence
- Decentralized swarm coordination: a combined coverage/connectivity approach
- On filter-successive linearization methods for nonlinear semidefinite programming
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- On Newton's method for the Fermat-Weber location problem
- A reduced Hessian algorithm with line search filter method for nonlinear programming
- A descent family of Dai-Liao conjugate gradient methods
- A filter algorithm for nonlinear systems of equalities and inequalities
- A note on ``A new iteration method for the matrix equation \(AX=B\)''
- A modified BFGS algorithm based on a hybrid secant equation
- On the worst-case evaluation complexity of non-monotone line search algorithms
- On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae
- On Nesterov's nonsmooth Chebyshev-Rosenbrock functions
- Two new conjugate gradient methods based on modified secant equations
- A framework of constraint preserving update schemes for optimization on Stiefel manifold
- An alternating structured trust region algorithm for separable optimization problems with nonconvex constraints