Worst case complexity of direct search under convexity (Q5962720)

From MaRDI portal
Property / full work available at URL: https://doi.org/10.1007/s10107-014-0847-0
Property / OpenAlex ID: W1996491707
Property / cites work: On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
Property / cites work: Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
Property / cites work: On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization
Property / cites work: Introduction to Derivative-Free Optimization
Property / cites work: On the Local Convergence of Pattern Search
Property / cites work: Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
Property / cites work: Smoothing and worst-case complexity for direct-search methods in nonsmooth optimization
Property / cites work: CUTEr and SifDec
Property / cites work: Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
Property / cites work: Q4435425
Property / cites work: Introductory lectures on convex optimization. A basic course.
Property / cites work: Random gradient-free minimization of convex functions
Property / cites work: Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
Property / cites work: Cubic regularization of Newton method and its global performance
Property / cites work: Q5491447
Property / cites work: Convex Analysis
Property / cites work: Worst case complexity of direct search

Language: English
Label: Worst case complexity of direct search under convexity
Description: scientific article; zbMATH DE number 6544659

    Statements

    Worst case complexity of direct search under convexity (English)
    23 February 2016
    This paper considers directional direct-search methods applied to the unconstrained minimization of a real-valued, convex, and continuously differentiable objective function \(f\). It is proved that direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, enjoy the same worst case complexity bound and global rate as the gradient method for the unconstrained minimization of a smooth convex function. The results are derived for convex functions for which the distance from any point of the initial level set to the solution set is uniformly bounded. This property holds whenever the solution set is bounded, but it is also met in several instances where the solution set is unbounded. It is shown that the number of iterations needed to reduce the norm of the gradient of the objective function below a given threshold is at most proportional to the inverse of the threshold. It is also shown that the absolute error in the function values decays at a sublinear rate proportional to the inverse of the iteration counter. Finally, it is proved that the sequences of absolute errors in the function values and in the iterates converge \(r\)-linearly in the strongly convex case.
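
    A minimal sketch of the algorithm class under study may help fix ideas: a directional direct-search method polls a positive spanning set of directions and accepts a trial point only when it yields sufficient decrease, measured by a forcing function such as \(\rho(\alpha) = c\,\alpha^2\). The Python code below is an illustrative sketch, not the paper's own implementation; the coordinate directions, the quadratic forcing function, and all parameter values are common textbook choices assumed here for concreteness.

    import numpy as np

    def direct_search(f, x0, alpha0=1.0, c=1e-4, theta=0.5,
                      alpha_tol=1e-8, max_iter=10000):
        """Directional direct search: accept a trial point only under
        the sufficient-decrease test f(x + alpha*d) < f(x) - c*alpha**2."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        alpha = alpha0
        n = x.size
        # Positive spanning set D: plus/minus the coordinate vectors.
        D = np.vstack([np.eye(n), -np.eye(n)])
        for _ in range(max_iter):
            if alpha < alpha_tol:      # the step size drives the stopping test
                break
            for d in D:                # poll step over the directions in D
                trial = x + alpha * d
                ftrial = f(trial)
                if ftrial < fx - c * alpha ** 2:   # forcing function rho(alpha)
                    x, fx = trial, ftrial          # successful iteration
                    break
            else:
                alpha *= theta         # unsuccessful poll: contract the step
        return x, fx

    # Example: a smooth convex quadratic with minimizer (1, 1, 1).
    x_best, f_best = direct_search(lambda z: float(np.sum((z - 1.0) ** 2)),
                                   np.zeros(3))

    In this convex setting, the paper's results say that such a scheme needs at most a number of iterations proportional to \(1/\epsilon\) to drive the gradient norm below \(\epsilon\), with the error in the function values decaying like the inverse of the iteration counter. Keeping (rather than expanding) the step size after a successful poll, as done above, is one standard variant of directional direct search.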
    derivative-free optimization
    direct search
    worst case complexity
    global rate
    sufficient decrease
    convexity

    Identifiers