Worst case complexity of direct search under convexity (Q5962720)
From MaRDI portal
Full work available at URL: https://doi.org/10.1007/s10107-014-0847-0
OpenAlex ID: W1996491707
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Worst case complexity of direct search under convexity | scientific article; zbMATH DE number 6544659 | |
Statements
Worst case complexity of direct search under convexity (English)
23 February 2016
This paper considers directional direct-search methods applied to the unconstrained minimization of a real-valued, convex, and continuously differentiable objective function \(f\). It is proved that direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, attain the same worst case complexity bound and global rate as the gradient method for the unconstrained minimization of a convex and smooth function. The results are derived for convex functions for which the supremum of the distance between any point in the initial level set and the solution set is finite. This property is satisfied when the solution set is bounded, but it also holds in several instances where the solution set is unbounded. It is shown that the number of iterations needed to reduce the norm of the gradient of the objective function below a given threshold is at most proportional to the inverse of the threshold. It is also shown that the absolute error in the function values decays at a sublinear rate proportional to the inverse of the iteration counter. Finally, it is proved that the sequences of absolute errors in the function values and of the iterates converge \(r\)-linearly in the strongly convex case.
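To make the acceptance mechanism described in the review concrete, the following is a minimal sketch of a directional direct-search iteration that polls a positive spanning set of directions and accepts a trial point only under a sufficient-decrease test governed by a forcing function. It is not the paper's exact algorithm: the function and parameter names, the parameter values, the coordinate polling set, and the quadratic forcing function \(\rho(\alpha) = c\,\alpha^2\) are illustrative assumptions.

```python
import numpy as np

def direct_search(f, x0, alpha0=1.0, c=1e-4, theta=0.5, gamma=2.0,
                  alpha_tol=1e-8, max_iter=10_000):
    """Sketch of directional direct search with sufficient decrease.

    The forcing function rho(alpha) = c * alpha**2 and all parameter
    values are illustrative choices, not taken from the paper.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    alpha = alpha0
    # Polling directions: the coordinate vectors and their negatives,
    # a simple positive spanning set.
    D = np.vstack([np.eye(n), -np.eye(n)])
    fx = f(x)
    for _ in range(max_iter):
        if alpha < alpha_tol:          # stop when the step size is small
            break
        rho = c * alpha ** 2           # sufficient-decrease threshold
        success = False
        for d in D:                    # poll the directions
            trial = x + alpha * d
            f_trial = f(trial)
            if f_trial < fx - rho:     # sufficient decrease: accept iterate
                x, fx = trial, f_trial
                alpha *= gamma         # successful poll: expand step size
                success = True
                break
        if not success:
            alpha *= theta             # unsuccessful poll: contract step size
    return x, fx

if __name__ == "__main__":
    # Smooth convex quadratic as a test objective (minimizer at (-1, 0, 0, 0)).
    f = lambda z: 0.5 * float(np.dot(z, z)) + z[0]
    x_min, f_min = direct_search(f, np.ones(4))
    print(x_min, f_min)
```

Accepting a trial point only when the decrease exceeds the forcing-function value, rather than under simple decrease, is the mechanism on which the worst case complexity and global rate results summarized above are based.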
derivative-free optimization
direct search
worst case complexity
global rate
sufficient decrease
convexity