Optimization theory and methods. Nonlinear programming
DOI: 10.1007/b106451 | zbMATH Open: 1129.90002 | OpenAlex: W2471988522 | MaRDI QID: Q2500511
Authors: Wenyu Sun, Yaxiang Yuan
Publication date: 17 August 2006
Published in: Springer Optimization and Its Applications
Full work available at URL: https://doi.org/10.1007/b106451
Keywords: quadratic programming; sequential quadratic programming; nonsmooth optimization; conjugate gradient method; line search; feasible direction methods; penalty function methods; (inexact) Newton method; (non-)quasi-Newton methods; nonlinear least-squares problems; self-scaling variable metric method; theory of constrained optimization; trust-region and conic model methods; trust-region methods for constrained problems
MSC classifications: Quadratic programming (90C20); Methods of successive quadratic programming type (90C55); Nonlinear programming (90C30); Methods of quasi-Newton type (90C53); Methods of reduced gradient type (90C52); Introductory exposition (textbooks, tutorial papers, etc.) pertaining to operations research and mathematical programming (90-01)
Cited In (showing first 100 items)
- A New First-Order Algorithmic Framework for Optimization Problems with Orthogonality Constraints
- Simplified iteratively regularized Gauss-Newton method in Banach spaces under a general source condition
- Two optimal Hager-Zhang conjugate gradient methods for solving monotone nonlinear equations
- A globally convergent filter-type trust region method for semidefinite programming
- A nonmonotone weighting self-adaptive trust region algorithm for unconstrained nonconvex optimization
- Distributed reconstruction of time-varying graph signals via a modified Newton's method
- A parallel line search subspace correction method for composite convex optimization
- A family of Hager-Zhang conjugate gradient methods for system of monotone nonlinear equations
- A note on a multiplicative parameters gradient method
- A high-order modified Levenberg-Marquardt method for systems of nonlinear equations with fourth-order convergence
- Uniqueness and numerical scheme for the Robin coefficient identification of the time-fractional diffusion equation
- Robust time-domain output error method for identifying continuous-time systems with time delay
- A novel self-adaptive trust region algorithm for unconstrained optimization
- A hybrid scaling parameter for the scaled memoryless BFGS method based on the \(\ell_\infty\) matrix norm
- A derivative-free scaling memoryless DFP method for solving large scale nonlinear monotone equations
- A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints
- Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization
- Two-parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length
- A new hybrid conjugate gradient method for large-scale unconstrained optimization problem with non-convex objective function
- Nonlinear programming. An introduction
- A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems
- A modified descent Polak-Ribière-Polyak conjugate gradient method with global convergence property for nonconvex functions
- Riemannian conjugate gradient methods with inverse retraction
- A new robust line search technique based on Chebyshev polynomials
- A brief introduction to manifold optimization
- A generalized worst-case complexity analysis for non-monotone line searches
- Gauss-Newton-type methods for bilevel optimization
- Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions
- Explicit pseudo-transient continuation and the trust-region updating strategy for unconstrained optimization
- A new simple model trust-region method with generalized Barzilai-Borwein parameter for large-scale optimization
- An augmented memoryless BFGS method based on a modified secant equation with application to compressed sensing
- A new CG algorithm based on a scaled memoryless BFGS update with adaptive search strategy, and its application to large-scale unconstrained optimization problems
- Memoryless quasi-Newton methods based on the spectral-scaling Broyden family for Riemannian optimization
- Convergence analysis of simplified iteratively regularized Gauss–Newton method in a Banach space setting
- Two families of scaled three-term conjugate gradient methods with sufficient descent property for nonconvex optimization
- A new family of conjugate gradient methods for unconstrained optimization
- Generalized continuation Newton methods and the trust-region updating strategy for the underdetermined system
- Simultaneous recovery of surface heat flux and thickness of a solid structure by ultrasonic measurements
- Some iterative methods for the largest positive definite solution to a class of nonlinear matrix equation
- A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem
- A novel diffeomorphic model for image registration and its algorithm
- A QSC method for fractional subdiffusion equations with fractional boundary conditions and its application in parameters identification
- Orthogonal canonical correlation analysis and applications
- A perfect example for the BFGS method
- Cubic interpolation: a line search technique for fuzzy optimization problems
- POD/DEIM reduced-order modeling of time-fractional partial differential equations with applications in parameter identification
- Asymptotic surrogate constraint method and its convergence for a class of semi-infinite programming
- A derivative-free multivariate spectral projection algorithm for constrained nonlinear monotone equations
- A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares
- A modified scaled memoryless BFGS preconditioned conjugate gradient algorithm for nonsmooth convex optimization
- Improved optimization methods for image registration problems
- Total variation image restoration method based on subspace optimization
- Two modifications of the method of the multiplicative parameters in descent gradient methods
- A trust-region method with improved adaptive radius for systems of nonlinear equations
- A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
- An ODE-based nonmonotone method for unconstrained optimization problems
- A cone constrained convex program: structure and algorithms
- An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart property
- A hybrid optimization method for multiplicative noise and blur removal
- Robust ridge estimator in restricted semiparametric regression models
- A nonmonotone trust region method based on simple conic models for unconstrained optimization
- Two new decomposition algorithms for training bound-constrained support vector machines
- Extended least trimmed squares estimator in semiparametric regression models with correlated errors
- An improved trust region algorithm for nonlinear equations
- A new approximation of the matrix rank function and its application to matrix rank minimization
- A hybrid trust region algorithm for unconstrained optimization
- Trust region algorithm with two subproblems for bound constrained problems
- A dwindling filter line search method for unconstrained optimization
- Long range search for maximum likelihood in exponential families
- A modified scaling parameter for the memoryless BFGS updating formula
- Nonlinear optimization.
- Two modified scaled nonlinear conjugate gradient methods
- A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei
- Practical methods of optimization.
- The modified Levenberg-Marquardt method for nonlinear equations with cubic convergence
- Decentralized swarm coordination: a combined coverage/connectivity approach
- On filter-successive linearization methods for nonlinear semidefinite programming
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- On Newton's method for the Fermat-Weber location problem
- A reduced Hessian algorithm with line search filter method for nonlinear programming
- A descent family of Dai-Liao conjugate gradient methods
- A filter algorithm for nonlinear systems of equalities and inequalities
- A note on ``A new iteration method for the matrix equation \(AX=B\)''
- A modified BFGS algorithm based on a hybrid secant equation
- On the worst-case evaluation complexity of non-monotone line search algorithms
- On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae
- On Nesterov's nonsmooth Chebyshev-Rosenbrock functions
- Two new conjugate gradient methods based on modified secant equations
- A framework of constraint preserving update schemes for optimization on Stiefel manifold
- An alternating structured trust region algorithm for separable optimization problems with nonconvex constraints
- A Dai-Liao conjugate gradient method via modified secant equation for system of nonlinear equations
- An improved multi-step gradient-type method for large scale optimization
- An infeasible interior-point algorithm with full-Newton step for linear optimization
- An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation
- A filter-line-search method for unconstrained optimization
- A nonmonotone hybrid method of conjugate gradient and Lanczos-type for solving nonlinear systems
- A quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods