The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations

From MaRDI portal
Publication: 5629147


DOI: 10.1093/imamat/6.1.76
zbMath: 0223.65023
Wikidata: Q55980586
Scholia: Q55980586
MaRDI QID: Q5629147

Charles Broyden

Publication date: 1970

Published in: IMA Journal of Applied Mathematics

Full work available at URL: https://doi.org/10.1093/imamat/6.1.76


65K05: Numerical mathematical programming methods

90C30: Nonlinear programming

90C20: Quadratic programming
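The "double-rank" updates in the paper's title are rank-two corrections to a Hessian (or inverse-Hessian) approximation; the class studied here includes what is now known as the BFGS formula. As a purely illustrative sketch (not taken from the paper itself; the test function, starting point, and exact line search are assumptions valid only for a quadratic), a minimal BFGS iteration in Python:

```python
import numpy as np

def bfgs_quadratic(A, b, x0, tol=1e-8, max_iter=50):
    """Minimize f(x) = 0.5 x^T A x - b^T x with a BFGS rank-two update.

    Illustrative only: the exact line search below is valid because f
    is quadratic with symmetric positive definite A.
    """
    x = x0.astype(float)
    H = np.eye(len(x0))                   # inverse-Hessian approximation
    g = A @ x - b                         # gradient of the quadratic
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                        # quasi-Newton search direction
        alpha = -(g @ p) / (p @ A @ p)    # exact line search (quadratic case)
        s = alpha * p                     # step
        x_new = x + s
        g_new = A @ x_new - b
        y = g_new - g                     # gradient change
        rho = 1.0 / (y @ s)
        I = np.eye(len(x))
        # BFGS rank-two update of the inverse-Hessian approximation
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = bfgs_quadratic(A, b, np.zeros(2))
```

For this quadratic the minimizer is the solution of A x = b, so `x_star` should agree with `np.linalg.solve(A, b)` to the stated tolerance.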


Related Items

Optimal Design of Vibration Absorbers

Symmetric minimum-norm updates for use in Gibbs free energy calculations

Recent advances in unconstrained optimization

Quasi Newton techniques generate identical points II: The proofs of four new theorems

An efficient conjugate direction method with orthogonalization for large-scale quadratic optimization problems

New optimization approach to multiphase flow

A parallel unconstrained quasi-Newton algorithm and its performance on a local memory parallel computer

A global optimization problem in portfolio selection

A logistic approach to knowledge structures

Vector generalized linear and additive extreme value models

Computation of time-periodic solutions of the Benjamin-Ono equation

The multidimensional moment-constrained maximum entropy problem: A BFGS algorithm with constraint scaling

A cut-peak function method for global optimization

A variable-metric method using a nonquadratic model

Substitute derivatives in unconstrained optimization: A comparison of finite difference and response surface approximations

Constrained optimization with normed moving truncations penalty-functions

Local and superlinear convergence of a class of variable metric methods

A compact updating formula for quasi-Newton minimization algorithms

On the rate of superlinear convergence of a class of variable metric methods

Superlinear convergence of symmetric Huang's class of methods

The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients

Variable metric methods in Hilbert space with applications to control problems

Numerical experiments on DFP-method, a powerful function minimization technique

A conjugate direction algorithm without line searches

Approximation methods for the unconstrained optimization

On the relation between quadratic termination and convergence properties of minimization algorithms. Part I. Theory

On the relation between quadratic termination and convergence properties of minimization algorithms. Part II. Applications

Unified approach to unconstrained minimization via basic matrix factorizations

A comparison of nonlinear optimization methods for supervised learning in multilayer feedforward neural networks

Computational experience with known variable metric updates

The least prior deviation quasi-Newton update

The linear algebra of block quasi-Newton algorithms

Variable metric methods for unconstrained optimization and nonlinear least squares

Using Fisher scoring to fit extended Poisson process models

Mechanical system modelling using recurrent neural networks via quasi-Newton learning methods

Automatic structure and parameter training methods for modeling of mechanical systems by recurrent neural networks

Data assimilation by field alignment

Annealing stochastic approximation Monte Carlo algorithm for neural network training

Semideterministic global optimization method: Application to a control problem of the Burgers equation

Variable metric algorithms: Necessary and sufficient conditions for identical behaviour of nonquadratic functions

Stability of Huang's update for the conjugate gradient method

The convergence of variable metric matrices in unconstrained optimization

A two-phase parameter estimation method for radiative transfer problems in paper industry applications

Experimental Investigation of Local Searches for Optimization of Grillage-Type Foundations

An alternative variational principle for variable metric updating

Parallel variable metric algorithms for unconstrained optimization

Superlinear convergence of Broyden's bounded θ-class of methods

Conjugate direction methods with variable storage

Optimally conditioned optimization algorithms without line searches

On the convergence rate of imperfect minimization algorithms in Broyden's β-class

A family of variable metric updates

Optimal conditioning in the convex class of rank two updates