Nonmonotone line search methods with variable sample size (Q2340358)
From MaRDI portal
Full work available at URL: https://doi.org/10.1007/s11075-014-9869-1
OpenAlex ID: W2029633462
Language: English
Label: Nonmonotone line search methods with variable sample size
Description: scientific article
Statements
Nonmonotone line search methods with variable sample size (English)
Publication date: 16 April 2015
The paper deals with nonmonotone line search methods for unconstrained optimization. The objective function is given in the form of a mathematical expectation and is approximated by a sample average approximation (SAA) with a large sample of fixed size. Since function evaluations are expensive, methods that start with a small sample and increase the sample size throughout the optimization process are usually considered; the aim is to ensure increasing precision during the optimization procedure regardless of the behavior of the objective function. In this paper, the authors introduce and analyze a class of algorithms that combines nonmonotone line search rules with a variable sample size strategy, extending the results of \textit{N. Krejić} and \textit{N. Krklec} [J. Comput. Appl. Math. 245, 213--231 (2013; Zbl 1262.65066)]. The sample size may oscillate from iteration to iteration, in accordance with the progress made in decreasing the objective function and with the precision measured by the approximate width of a confidence interval. The proposed methods yield approximate solutions of the SAA problem at significantly smaller computational cost than the classical SAA method. A complete algorithm is presented, and global convergence results for general search directions are proven. An R-linear rate of convergence is obtained when the gradient of the objective function is available and a descent search direction is used at every iteration. Extensive numerical experiments are carried out on academic optimization problems in a noisy environment as well as on problems with real data.
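The mechanism described above, a nonmonotone Armijo-type test on SAA function values combined with a sample size that grows or shrinks according to how the achieved decrease compares with the statistical precision, can be sketched compactly. The following Python code is a minimal sketch, not the authors' algorithm: the max-type nonmonotone rule, the use of a 95% confidence-interval half-width as the precision measure, the doubling/halving update of the sample size N, and all names (F, gradF, draw_sample) and parameter values (N0, Nmax, eta, beta, M) are illustrative assumptions.

import numpy as np

def saa_value_grad(x, xi_sample, F, gradF):
    # SAA estimates of f(x) = E[F(x, xi)]: value, gradient, and the sample
    # standard deviation of the function values (used as a precision proxy).
    vals = np.array([F(x, xi) for xi in xi_sample])
    grads = np.array([gradF(x, xi) for xi in xi_sample])
    return vals.mean(), grads.mean(axis=0), vals.std(ddof=1)

def nonmonotone_vss(x0, F, gradF, draw_sample, N0=10, Nmax=1000,
                    max_iter=100, eta=1e-4, beta=0.5, M=5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    N = N0
    recent = []                          # window of recent SAA values
    for _ in range(max_iter):
        xi = draw_sample(rng, N)         # fresh sample of current size N
        fx, gx, sx = saa_value_grad(x, xi, F, gradF)
        recent = (recent + [fx])[-M:]
        ref = max(recent)                # max-type nonmonotone reference value
        d = -gx                          # negative gradient: a descent direction
        t = 1.0
        while True:                      # nonmonotone Armijo backtracking
            f_trial, _, _ = saa_value_grad(x + t * d, xi, F, gradF)
            if f_trial <= ref + eta * t * gx.dot(d) or t < 1e-10:
                break
            t *= beta
        decrease = fx - f_trial
        x = x + t * d
        eps_N = 1.96 * sx / np.sqrt(N)   # approximate confidence-interval half-width
        # Illustrative update: enlarge the sample when the achieved decrease
        # falls below the statistical precision, shrink it when it dominates.
        if decrease < eps_N:
            N = min(2 * N, Nmax)
        elif decrease > 10 * eps_N and N > N0:
            N = max(N // 2, N0)
    return x

For instance, with F(x, xi) = ||x - xi||^2, gradF(x, xi) = 2*(x - xi), and draw_sample(rng, n) returning n standard normal vectors, the iterates approach the minimizer x = 0 of E[F(x, xi)] while N adapts to the observed progress; the point of the variable-sample-size strategy is precisely that most iterations are performed with N well below its maximal value.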
Keywords: unconstrained minimization; nonmonotone line search; sample average approximation; variable sample size; algorithms; global convergence; local convergence; numerical experiments