A stochastic steepest-descent algorithm (Q1093541): Difference between revisions

From MaRDI portal
Property / MaRDI profile type: MaRDI publication profile
Property / cites work: An Adaptive Precision Gradient Method for Optimal Control
Property / cites work: A stochastic steepest-descent algorithm
Property / cites work: Minimization of functions having Lipschitz continuous first partial derivatives
Property / cites work: Computational methods in optimization. A unified approach.
Property / cites work: Stochastic approximation algorithms for the local optimization of functions with nonunique stationary points
Property / cites work: Stochastic approximation type methods for constrained systems: Algorithms and numerical results
Property / cites work: Stochastic approximation algorithms for constrained optimization problems
Property / cites work: Stochastic Estimation of the Maximum of a Regression Function

Revision as of 12:58, 18 June 2024

Label (English): A stochastic steepest-descent algorithm
Description: scientific article
    Statements

    A stochastic steepest-descent algorithm (English)
    1988
    A stochastic steepest-descent algorithm for function minimization under noisy observations is presented. Function evaluation is done by performing a number of random experiments on a suitable probability space. The number of experiments performed at a point generated by the algorithm reflects a balance between the conflicting requirements of accuracy and computational cost. The algorithm uses an adaptive precision scheme to determine the number of random experiments at a point; this number tends to increase whenever a stationary point is approached, and to decrease otherwise. Two rules determine the number of random experiments at a point: one, in the inner loop of the algorithm, uses the magnitude of the observed gradient of the function to be minimized; the other, in the outer loop, uses a measure of the accumulated errors in function evaluations at past points generated by the algorithm. Once a stochastic approximation of the function to be minimized is obtained at a point, the algorithm generates the next point by the deterministic steepest-descent methods of Armijo and Polak. Convergence of the algorithm to stationary points is demonstrated under suitable assumptions.
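    The scheme described in the abstract can be sketched in code. This is a minimal illustration only, not the paper's actual construction: the quadratic objective, Gaussian noise model, the specific sample-size doubling rule, and all function names are assumptions chosen to show the interplay between the adaptive precision rule and an Armijo-type backtracking step.

```python
import math
import random


def noisy_value_and_grad(x, n_samples):
    """Monte-Carlo estimates of a toy objective f(x) = x^2 and its gradient,
    averaged over n_samples noisy observations (illustrative stand-in for
    the paper's random experiments)."""
    vals = [x * x + random.gauss(0.0, 0.1) for _ in range(n_samples)]
    grads = [2.0 * x + random.gauss(0.0, 0.1) for _ in range(n_samples)]
    return sum(vals) / n_samples, sum(grads) / n_samples


def stochastic_steepest_descent(x0, iters=50, n0=10, alpha=0.1, beta=0.5):
    """Sketch of the idea above: the number of random experiments grows when
    the observed gradient is small (i.e. near a stationary point), and each
    move uses an Armijo-style backtracking rule on the noisy estimates."""
    x, n = x0, n0
    for _ in range(iters):
        f_est, g_est = noisy_value_and_grad(x, n)
        # Inner-loop rule (illustrative): a small observed gradient means
        # we are near a stationary point, so demand more precision.
        if abs(g_est) < 1.0 / math.sqrt(n):
            n *= 2
            continue
        # Armijo backtracking on the estimated function values.
        step = 1.0
        while step > 1e-8:
            f_new, _ = noisy_value_and_grad(x - step * g_est, n)
            if f_new <= f_est - alpha * step * g_est * g_est:
                break
            step *= beta
        x = x - step * g_est
    return x


random.seed(0)
x_final = stochastic_steepest_descent(x0=3.0)
print(x_final)
```

    The doubling of `n` plays the role of the adaptive precision scheme: as the iterate approaches the minimizer at 0, gradient estimates shrink below the threshold and each further comparison is made with more averaged observations.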
    Keywords: Armijo stepsize; stochastic steepest-descent algorithm; noisy observations; adaptive precision scheme; stochastic approximation; stationary points