Minimization of functions having Lipschitz continuous first partial derivatives

From MaRDI portal
Publication:2541105


DOI: 10.2140/pjm.1966.16.1
zbMath: 0202.46105
Wikidata: Q61603143 (Scholia: Q61603143)
MaRDI QID: Q2541105

L. Armijo

Publication date: 1966

Published in: Pacific Journal of Mathematics

Full work available at URL: https://doi.org/10.2140/pjm.1966.16.1
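This paper introduced what is now commonly called the Armijo step-size rule: accept a step length only if it yields a sufficient decrease proportional to the directional derivative, halving the step otherwise. A minimal sketch of that backtracking rule (function names, the test problem, and parameter values `alpha0`, `beta`, `sigma` are illustrative choices, not from the paper):

```python
import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Backtracking line search enforcing the Armijo condition:
        f(x + a*d) <= f(x) + sigma * a * <grad_f(x), d>.
    The step a is shrunk by the factor beta until the condition holds."""
    fx = f(x)
    slope = np.dot(grad_f(x), d)  # directional derivative; negative for a descent direction
    a = alpha0
    while f(x + a * d) > fx + sigma * a * slope:
        a *= beta
    return a

# Illustrative use on f(x) = x1^2 + 2*x2^2 with a steepest-descent direction
f = lambda x: x[0]**2 + 2 * x[1]**2
g = lambda x: np.array([2 * x[0], 4 * x[1]])
x = np.array([1.0, 1.0])
d = -g(x)                      # descent direction
a = armijo_step(f, g, x, d)    # accepted step length
```

The rule guarantees monotone decrease of `f` along descent directions while terminating in finitely many halvings whenever the gradient is Lipschitz continuous, which is the setting of the paper's title.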



Related Items

Newton-type Methods with Generalized Distances For Constrained Optimization
A variant of Korpelevich's method for variational inequalities with a new search strategy
A dual scheme for traffic assignment problems
On the acceleration of the backpropagation training method
Global convergence of the BFGS algorithm with nonmonotone line search
Choice of a step-length in an almost everywhere differentiable (in every direction) (almost everywhere locally Lipschitz) lower-semi-continuous minimization problem
Projected dynamical systems modeling and computation of spatial network equilibria
Optimizing Frequencies in a Transit Network: a Nonlinear Bi‐level Programming Approach
Partitioned quasi-Newton methods for nonlinear equality constrained optimization
Mesh independence of Newton-like methods for infinite dimensional problems
A barrier function method for minimax problems
A new conic method for unconstrained minimization
A generalized quadratic programming-based phase I--phase II method for inequality-constrained optimization
A globally convergent algorithm for the Euclidean multiplicity location problem
Optimal low-order controller design via `LQG-like' parametrization
On the rate of convergence of two minimax algorithms
Partial linearization methods in nonlinear programming
Optimization algorithm with probabilistic estimation
Analysis and implementation of a dual algorithm for constrained optimization
On the use of consistent approximations in the solution of semi-infinite optimization and optimal control problems
An adaptive conjugate gradient learning algorithm for efficient training of neural networks
A globally convergent algorithm for facility location on a sphere
A class of gap functions for variational inequalities
On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds
Descent methods with linesearch in the presence of perturbations
Experiments with new stochastic global optimization search techniques
Convergence of implementable descent algorithms for unconstrained optimization
A globalization procedure for solving nonlinear systems of equations
Efficient line search algorithm for unconstrained optimization
An extension of the partial credit model with an application to the measurement of change
Probability-theoretical generalization of the second Lyapunov method
Iterative processes: A survey of convergence theory using Lyapunov second method
A dimension-reducing method for unconstrained optimization
OPTAC: A portable software package for analyzing and comparing optimization methods by visualization
A class of gradient unconstrained minimization algorithms with adaptive stepsize
An adaptive Gauss-Newton algorithm for training multilayer nonlinear filters that have embedded memory
A pathsearch damped Newton method for computing general equilibria
Convergence of the steepest descent method for minimizing quasiconvex functions