Matrix conditioning and nonlinear optimization
Publication: 4147882
DOI: 10.1007/BF01588962
zbMATH Open: 0371.90109
OpenAlex: W2035079355
MaRDI QID: Q4147882
FDO: Q4147882
Authors: David F. Shanno, Kang Hoh Phua
Publication date: 1978
Published in: Mathematical Programming
Full work available at URL: https://doi.org/10.1007/bf01588962
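This record concerns Shanno and Phua's work on the scaling and conditioning of quasi-Newton updates. As a purely illustrative aid (not code from the paper), the following minimal Python sketch shows a BFGS iteration in which the initial inverse-Hessian approximation is rescaled by s'y / y'y before the first update, the kind of initial scaling studied in this literature; the function names and the simple Armijo backtracking line search are assumptions of the sketch, not details taken from this record.

    import numpy as np

    def bfgs_scaled(f, grad, x0, max_iter=200, tol=1e-8):
        # Minimal BFGS with an initial inverse-Hessian rescaling of the
        # kind studied in this literature: H0 <- (s'y / y'y) I before the
        # first update. Illustrative sketch only.
        x = np.asarray(x0, dtype=float)
        n = x.size
        g = grad(x)
        H = np.eye(n)                 # inverse-Hessian approximation
        scaled = False
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            p = -H @ g                # quasi-Newton search direction
            t = 1.0                   # crude Armijo backtracking line search
            while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
                t *= 0.5
            s = t * p
            x_new = x + s
            g_new = grad(x_new)
            y = g_new - g
            sy = s @ y
            if sy > 1e-12:            # curvature condition guards the update
                if not scaled:
                    # rescale the initial matrix before the first update
                    H = (sy / (y @ y)) * np.eye(n)
                    scaled = True
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS inverse update
            x, g = x_new, g_new
        return x

    # Example: the two-dimensional Rosenbrock function.
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    rosen_grad = lambda z: np.array([
        -2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
        200 * (z[1] - z[0]**2),
    ])
    print(bfgs_scaled(rosen, rosen_grad, np.array([-1.2, 1.0])))

Without the rescaling, the first BFGS step is taken from the identity matrix, which can be badly conditioned relative to the problem's scaling; the one-time rescaling is a cheap way to match the magnitude of the true Hessian along the first step.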
Cites Work
- Algorithm 500: Minimization of Unconstrained Multivariate Functions [E4]
- A Family of Variable-Metric Methods Derived by Variational Means
- The Convergence of a Class of Double-rank Minimization Algorithms
- A new approach to variable metric algorithms
- Conditioning of Quasi-Newton Methods for Function Minimization
- Self-Scaling Variable Metric (SSVM) Algorithms. Part I
- Self-Scaling Variable Metric (SSVM) Algorithms. Part II
- Optimal conditioning of self-scaling variable metric algorithms
- On the selection of parameters in Self Scaling Variable Metric Algorithms
- Quasi-Newton Methods and their Application to Function Minimisation
- Optimally conditioned optimization algorithms without line searches
- Title not available
- An algorithm that minimizes homogeneous functions of \(n\) variables in \(n + 2\) iterations and rapidly minimizes general functions
- Methods of conjugate directions versus quasi-Newton methods
Cited In (84)
- Numerical experience with a class of self-scaling quasi-Newton algorithms
- Wide interval for efficient self-scaling quasi-Newton algorithms
- A new and dynamic method for unconstrained minimization
- A new restarting adaptive trust-region method for unconstrained optimization
- A limited memory quasi-Newton trust-region method for box constrained optimization
- Perspectives on self-scaling variable metric algorithms
- An assessment of quasi-Newton sparse update techniques for nonlinear structural analysis
- A double parameter scaled BFGS method for unconstrained optimization
- Preconditioned low-order Newton methods
- Computational experience with known variable metric updates
- Family of optimally conditioned quasi-Newton updates for unconstrained optimization
- The application of optimal control methodology to nonlinear programming problems
- On large scale nonlinear network optimization
- Scaled memoryless symmetric rank one method for large-scale optimization
- A Sparse Quasi-Newton Update Derived Variationally with a Nondiagonally Weighted Frobenius Norm
- A trust-region strategy for minimization on arbitrary domains
- Shifted limited-memory variable metric methods for large-scale unconstrained optimization
- A double parameter self-scaling memoryless BFGS method for unconstrained optimization
- Two modified scaled nonlinear conjugate gradient methods
- On the performance of switching BFGS/SR1 algorithms for unconstrained optimization
- Scaling damped limited-memory updates for unconstrained optimization
- An improved adaptive trust-region method for unconstrained optimization
- Computational aspects of large elastoplastic deformations in the presence of anisotropy and plastic spin
- New combined method for unconstrained minimization
- Parallel quasi-Newton methods for unconstrained optimization
- Analysis of a self-scaling quasi-Newton method
- Title not available
- Implicit updates in multistep quasi-Newton methods
- Mechanical system modelling using recurrent neural networks via quasi-Newton learning methods
- Extra multistep BFGS updates in quasi-Newton methods
- Broyden's quasi-Newton methods for a nonlinear system of equations and unconstrained optimization: a review and open problems
- Alternating multi-step quasi-Newton methods for unconstrained optimization
- Variationally derived scaling and variable metric updates from the preconvex part of the Broyden family
- On the behaviour of a combined extra-updating/self-scaling BFGS method
- An efficient implementation of a trust region method for box constrained optimization
- Computational experiments with scaled initial Hessian approximation for the Broyden family methods
- Sizing the BFGS and DFP updates: Numerical study
- A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations
- Extra updates for the BFGS method
- An efficient method for nonlinearly constrained networks
- New implicit updates in multi-step quasi-Newton methods for unconstrained optimisation
- Using function-values in multi-step quasi-Newton methods
- Impulse noise removal by an adaptive trust-region method
- Partitioned variable metric updates for large structured optimization problems
- Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
- Structured symmetric rank-one method for unconstrained optimization
- Variable metric methods for unconstrained optimization and nonlinear least squares
- A fast and robust unconstrained optimization method requiring minimum storage
- A restarting approach for the symmetric rank one update for unconstrained optimization
- A dense initialization for limited-memory quasi-Newton methods
- A symmetric rank-one method based on extra updating techniques for unconstrained optimization
- An example of numerical nonconvergence of a variable-metric method
- On diagonally-preconditioning the 2-step BFGS method with accumulated steps for linearly constrained nonlinear programming
- On diagonally preconditioning the truncated Newton method for super-scale linearly constrained nonlinear programming
- On the limited memory BFGS method for large scale optimization
- Three-step fixed-point quasi-Newton methods for unconstrained optimisation
- Family of projected descent methods for optimization problems with simple bounds
- Testing a Class of Methods for Solving Minimization Problems with Simple Bounds on the Variables
- A parallel unconstrained quasi-Newton algorithm and its performance on a local memory parallel computer
- A new arc algorithm for unconstrained optimization
- The global convergence of self-scaling BFGS algorithm with non-monotone line search for unconstrained nonconvex optimization problems
- Adding variables to quasi-Newton Hessian approximations
- A compact variable metric algorithm for nonlinear minimax approximation
- The use of alternation and recurrences in two-step quasi-Newton methods
- Some numerical experiments with variable-storage quasi-Newton algorithms
- Numerical comparison of several variable metric algorithms
- A nonlinear model for function-value multistep methods
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- HARES: An efficient method for first-principles electronic structure calculations of complex systems
- Vectorization of conjugate-gradient methods for large-scale minimization in meteorology
- Global convergence property of scaled two-step BFGS method
- Numerical experience with multiple update quasi-Newton methods for unconstrained optimization
- A rank-one fitting algorithm for unconstrained optimization problems
- Maintaining the positive definiteness of the matrices in reduced secant methods for equality constrained optimization
- Convergence analysis of the self-dual optimally conditioned SSVM method of Oren-Spedicato
- Title not available
- A double-parameter scaling Broyden-Fletcher-Goldfarb-Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization
- Scaled nonlinear conjugate gradient methods for nonlinear least squares problems
- How much do approximate derivatives hurt filter methods?
- L-Broyden methods: a generalization of the L-BFGS method to the limited-memory Broyden family
- On quasi-Newton methods in fast Fourier transform-based micromechanics
- A Kantorovich theorem for the structured PSB update in Hilbert space
- Low rank updates in preconditioning the saddle point systems arising from data assimilation problems
- A trust-region method using extended nonmonotone technique for unconstrained optimization