Global optimization by random perturbation of the gradient method with a fixed parameter (Q1337130)

From MaRDI portal

Latest revision as of 10:04, 23 May 2024

scientific article

Language: English
Label: Global optimization by random perturbation of the gradient method with a fixed parameter
    Statements

    Global optimization by random perturbation of the gradient method with a fixed parameter (English)
    30 October 1994
    The objective function is assumed to be multimodal, bounded and differentiable, and the feasible region is a ball in Euclidean space. The algorithm implements a randomly perturbed gradient method: the perturbation is a random vector \(Z\), almost surely belonging to the feasible region, multiplied by a decreasing factor converging to zero. Convergence with probability 1 is proved, and results of numerical experiments are reported.
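    The scheme described in the review can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the step size, the decreasing factor \(c_k = 1/\sqrt{k}\), the projection step, and the test objective are all assumptions made for the example.

```python
import math
import random

def perturbed_gradient_descent(grad, x0, radius, n_iters=2000, step=0.05, seed=0):
    """Sketch of a randomly perturbed gradient method on a ball.

    At each iteration the gradient step is perturbed by a random vector Z
    drawn uniformly inside the feasible ball (so Z belongs to the feasible
    region almost surely), scaled by a decreasing factor c_k -> 0.
    Names and schedules here are illustrative assumptions.
    """
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    for k in range(1, n_iters + 1):
        # draw Z uniformly inside the ball: uniform direction, radius * U^(1/d)
        z = [rng.gauss(0.0, 1.0) for _ in range(d)]
        nz = math.sqrt(sum(c * c for c in z)) or 1.0
        r = radius * rng.random() ** (1.0 / d)
        z = [r * c / nz for c in z]
        c_k = 1.0 / math.sqrt(k)  # decreasing factor converging to zero
        g = grad(x)
        x = [xi - step * gi + c_k * zi for xi, gi, zi in zip(x, g, z)]
        # keep the iterate inside the feasible ball
        nx = math.sqrt(sum(c * c for c in x))
        if nx > radius:
            x = [radius * c / nx for c in x]
    return x

# example: a multimodal objective on the ball of radius 3 (illustrative)
def grad_f(x):
    # gradient of f(x) = sum(x_i^2 + 2*cos(3*x_i)), which has many local minima
    return [2.0 * xi - 6.0 * math.sin(3.0 * xi) for xi in x]

x_star = perturbed_gradient_descent(grad_f, [2.5, -2.5], radius=3.0)
```

    The decreasing perturbation lets early iterations escape local minima while late iterations settle down, which is the mechanism behind the almost-sure convergence claim.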
    global optimization
    Monte Carlo methods
    randomly perturbed gradient method