Nonmonotone line search methods with variable sample size (Q2340358)

From MaRDI portal
Property / DOI: 10.1007/s11075-014-9869-1
Property / describes a project that uses: AMLET
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1007/s11075-014-9869-1
Property / OpenAlex ID: W2029633462
Property / cites work: Optimality functions in stochastic programming
Property / cites work: An adaptive Monte Carlo algorithm for computing mixed logit estimators
Property / cites work: Convergence theory for nonconvex stochastic programming with an application to mixed logit
Property / cites work: Globally convergent inexact quasi-Newton methods for solving nonlinear systems
Property / cites work: On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
Property / cites work: Sample size selection in optimization methods for machine learning
Property / cites work: A derivative-free nonmonotone line search and its application to the spectral residual method
Property / cites work: On the nonmonotone line search
Property / cites work: Variable-number sample-path optimization
Property / cites work: A derivative-free nonmonotone line-search technique for unconstrained optimization
Property / cites work: Benchmarking optimization software with performance profiles
Property / cites work: Hybrid Deterministic-Stochastic Methods for Data Fitting
Property / cites work: An experimental methodology for response surface optimization methods
Property / cites work: A Nonmonotone Line Search Technique for Newton’s Method
Property / cites work: A class of nonmonotone stabilization methods in unconstrained optimization
Property / cites work: Variable-sample methods for stochastic optimization
Property / cites work: Line search methods with variable sample size for unconstrained optimization
Property / cites work: Globally convergent Jacobian smoothing inexact Newton methods for NCP
Property / cites work: Spectral residual method without gradient information for solving large-scale nonlinear systems of equations
Property / cites work: A derivative-free line search and global convergence of Broyden-like method for nonlinear equations
Property / cites work: Introductory lectures on convex optimization. A basic course
Property / cites work: Numerical Optimization
Property / cites work: On Choosing Parameters in Retrospective-Approximation Algorithms for Stochastic Root Finding and Simulation Optimization
Property / cites work: Efficient sample sizes in stochastic nonlinear programming
Property / cites work: The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
Property / cites work: An Assessment of Nonmonotone Linesearch Techniques for Unconstrained Optimization
Property / cites work: Introduction to Stochastic Search and Optimization
Property / cites work: A nonmonotone spectral projected gradient method for large-scale topology optimization problems
Property / cites work: A Nonmonotone Line Search Technique and Its Application to Unconstrained Optimization


Language: English
Label: Nonmonotone line search methods with variable sample size
Description: scientific article

    Statements

    Nonmonotone line search methods with variable sample size (English)
    16 April 2015
    The paper deals with nonmonotone line search methods for unconstrained optimization. The objective function has the form of a mathematical expectation and is approximated by a sample average approximation (SAA) with a large sample of fixed size. As function evaluations are expensive, general methods that start with a small sample and increase the sample size throughout the optimization process are usually considered. The aim is to ensure increasing precision during the optimization procedure regardless of the behavior of the objective function. In this paper, the authors introduce and analyze a class of algorithms that combines nonmonotone line search rules with a variable sample size strategy, extending the results developed by \textit{N. Krejić} and \textit{N. Krklec} [J. Comput. Appl. Math. 245, 213--231 (2013; Zbl 1262.65066)]. The sample size may oscillate from iteration to iteration in accordance with the progress made in decreasing the objective function and with the precision measured by an approximate width of the confidence interval. The proposed methods produce approximate solutions of the SAA problem at significantly smaller computational cost than the classical SAA method. A complete algorithm is presented, and global convergence results for general search directions are proven. An R-linear rate of convergence is obtained when the gradient of the objective function is available and a descent search direction is used at every iteration. Extensive numerical experiments are carried out on academic optimization problems in a noisy environment as well as on problems with real data.
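    To make the mechanism concrete, the following is a minimal Python sketch of the idea described in the review, not the algorithm of the paper itself: an SAA estimate of f(x) = E[F(x, xi)], a max-type nonmonotone Armijo rule, and a heuristic update that lets the sample size grow or shrink with the balance between the achieved decrease and an approximate confidence-interval width. All names and parameters (F, saa, solve, n_min, n_max, eta, memory) are hypothetical, and the finite-difference gradient merely stands in for the general search directions covered by the convergence theory.

    import numpy as np

    def F(x, xi):
        # Hypothetical noisy integrand; a real application supplies F(x, xi).
        return 0.5 * np.dot(x - xi, x - xi)

    def saa(x, sample):
        # SAA estimate f_N(x) = (1/N) sum_i F(x, xi_i), together with a ~95%
        # confidence half-width used as the precision measure.
        vals = np.array([F(x, xi) for xi in sample])
        return vals.mean(), 1.96 * vals.std(ddof=1) / np.sqrt(len(vals))

    def solve(x, n_min=10, n_max=2000, iters=50, eta=1e-4, memory=5):
        rng = np.random.default_rng(0)
        n = n_min
        sample = rng.standard_normal((n, x.size))
        f_val, eps = saa(x, sample)
        history = [f_val]                  # recent values for the nonmonotone rule
        for _ in range(iters):
            # Central-difference gradient of the current SAA function.
            h, g = 1e-6, np.zeros_like(x)
            for i in range(x.size):
                e = np.zeros_like(x)
                e[i] = h
                g[i] = (saa(x + e, sample)[0] - saa(x - e, sample)[0]) / (2 * h)
            d = -g                         # descent direction for f_N
            # Nonmonotone Armijo test against the max of the last few values.
            f_ref = max(history[-memory:])
            t = 1.0
            f_trial, eps = saa(x + t * d, sample)
            while f_trial > f_ref + eta * t * np.dot(g, d) and t > 1e-10:
                t *= 0.5
                f_trial, eps = saa(x + t * d, sample)
            decrease, x = f_val - f_trial, x + t * d
            history.append(f_trial)
            # Variable sample size: grow N when the confidence half-width
            # dominates the achieved decrease, shrink it otherwise, so N may
            # oscillate between n_min and n_max.
            n = min(2 * n, n_max) if eps > decrease else max(n // 2, n_min)
            sample = rng.standard_normal((n, x.size))
            f_val, eps = saa(x, sample)    # re-estimate on the new sample
        return x

    print(solve(np.full(3, 3.0)))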
    unconstrained minimization
    nonmonotone line search
    sample average approximation
    variable sample size
    algorithms
    global convergence
    local convergence
    numerical experiments

    Identifiers

    DOI: 10.1007/s11075-014-9869-1
    OpenAlex ID: W2029633462