Optimization theory and methods. Nonlinear programming
DOI: 10.1007/b106451 · zbMATH Open: 1129.90002 · OpenAlex: W2471988522 · MaRDI QID: Q2500511 · FDO: Q2500511
Authors: Wenyu Sun, Yaxiang Yuan
Publication date: 17 August 2006
Published in: Springer Optimization and Its Applications
Full work available at URL: https://doi.org/10.1007/b106451
Keywords: quadratic programming; sequential quadratic programming; nonsmooth optimization; conjugate gradient method; line search; feasible direction methods; penalty function methods; (inexact) Newton method; (non-)quasi-Newton methods; nonlinear least-squares problems; self-scaling variable metric method; theory of constrained optimization; trust-region and conic model methods; trust-region methods for constrained problems
MSC classification: Quadratic programming (90C20); Methods of successive quadratic programming type (90C55); Nonlinear programming (90C30); Methods of quasi-Newton type (90C53); Methods of reduced gradient type (90C52); Introductory exposition (textbooks, tutorial papers, etc.) pertaining to operations research and mathematical programming (90-01)
Cited in (showing first 100 items)
- Two modifications of the method of the multiplicative parameters in descent gradient methods
- A trust-region method with improved adaptive radius for systems of nonlinear equations
- A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
- An ODE-based nonmonotone method for unconstrained optimization problems
- A cone constrained convex program: structure and algorithms
- An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart property
- A hybrid optimization method for multiplicative noise and blur removal
- Robust ridge estimator in restricted semiparametric regression models
- A nonmonotone trust region method based on simple conic models for unconstrained optimization
- Two new decomposition algorithms for training bound-constrained support vector machines
- Extended least trimmed squares estimator in semiparametric regression models with correlated errors
- An improved trust region algorithm for nonlinear equations
- A new approximation of the matrix rank function and its application to matrix rank minimization
- A hybrid trust region algorithm for unconstrained optimization
- Trust region algorithm with two subproblems for bound constrained problems
- A dwindling filter line search method for unconstrained optimization
- Long range search for maximum likelihood in exponential families
- A modified scaling parameter for the memoryless BFGS updating formula
- Nonlinear optimization.
- Two modified scaled nonlinear conjugate gradient methods
- A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei
- Practical methods of optimization.
- The modified Levenberg-Marquardt method for nonlinear equations with cubic convergence
- Decentralized swarm coordination: a combined coverage/connectivity approach
- On filter-successive linearization methods for nonlinear semidefinite programming
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- On Newton's method for the Fermat-Weber location problem
- A reduced Hessian algorithm with line search filter method for nonlinear programming
- A descent family of Dai-Liao conjugate gradient methods
- A filter algorithm for nonlinear systems of equalities and inequalities
- A note on "A new iteration method for the matrix equation \(AX=B\)"
- A modified BFGS algorithm based on a hybrid secant equation
- On the worst-case evaluation complexity of non-monotone line search algorithms
- On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae
- On Nesterov's nonsmooth Chebyshev-Rosenbrock functions
- Two new conjugate gradient methods based on modified secant equations
- A framework of constraint preserving update schemes for optimization on Stiefel manifold
- An alternating structured trust region algorithm for separable optimization problems with nonconvex constraints
- A Dai-Liao conjugate gradient method via modified secant equation for system of nonlinear equations
- An improved multi-step gradient-type method for large scale optimization
- An infeasible interior-point algorithm with full-Newton step for linear optimization
- An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation
- A filter-line-search method for unconstrained optimization
- A nonmonotone hybrid method of conjugate gradient and Lanczos-type for solving nonlinear systems
- A quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods
- Accelerated gradient descent methods with line search
- A feasible method for optimization with orthogonality constraints
- Numerical multilinear algebra and its applications
- Linearly structured quadratic model updating using partial incomplete eigendata
- Hybridization of accelerated gradient descent method
- An unconstrained optimization method using nonmonotone second order Goldstein's line search
- Optimal radiation fractionation for low-grade gliomas: insights from a mathematical model
- Introduction to nonlinear optimization: theory, algorithms, and applications with MATLAB
- A Barzilai-Borwein conjugate gradient method
- Numerical optimization. Theoretical and practical aspects. Transl. from the French
- A self-adjusting conjugate gradient method with sufficient descent condition and conjugacy condition
- Visual MISER: an efficient user-friendly visual program for solving optimal control problems
- Calculating the normalising constant of the Bingham distribution on the sphere using the holonomic gradient method
- A brief survey of methods for solving nonlinear least-squares problems
- The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices
- A derivative-free algorithm for spherically constrained optimization
- A descent extension of the Polak-Ribière-Polyak conjugate gradient method
- Methods of unconstrained optimization
- A new method of moving asymptotes for large-scale unconstrained optimization
- A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization
- Limited memory BFGS method based on a high-order tensor model
- On the iteratively regularized Gauss-Newton method in Banach spaces with applications to parameter identification problems
- New quasi-Newton methods via higher order tensor models
- Two effective hybrid conjugate gradient algorithms based on modified BFGS updates
- Ensemble preconditioning for Markov chain Monte Carlo simulation
- A regularized Newton method for computing ground states of Bose-Einstein condensates
- A modified trust region method with Beale's PCG technique for optimization
- Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization
- A New Dai-Liao Conjugate Gradient Method with Optimal Parameter Choice
- Nonlinear conjugate gradient methods with sufficient descent condition for large-scale unconstrained optimization
- Nonlinear Programming
- An accelerated nonmonotone trust region method with adaptive trust region for unconstrained optimization
- A quadratic penalty method for hypergraph matching
- An adaptive scaled BFGS method for unconstrained optimization
- A modified three-term conjugate gradient method with sufficient descent property
- An equivalency condition of nonsingularity in nonlinear semidefinite programming
- Numerical treatment of nonlinear MHD Jeffery-Hamel problems using stochastic algorithms
- A new subspace correction method for nonlinear unconstrained convex optimization problems
- Two extensions of the Dai-Liao method with sufficient descent property based on a penalization scheme
- Multi-modality image registration models and efficient algorithms
- A new supermemory gradient method for unconstrained optimization problems
- Quadratic interpolation technique to minimize univariable fuzzy functions
- Two adaptive Dai-Liao nonlinear conjugate gradient methods
- On \(R\)-linear convergence analysis for a class of gradient methods
- Two optimal Dai-Liao conjugate gradient methods
- Nonmonotone conic trust region method with line search technique for bound constrained optimization
- A dwindling filter line search algorithm for nonlinear equality constrained optimization
- The \(d\)-level nested logit model: assortment and price optimization problems
- Inhomogeneous polynomial optimization over a convex set: an approximation approach
- A modified nonmonotone trust region line search method
- A linear hybridization of the Hestenes-Stiefel method and the memoryless BFGS technique