Nonmonotone line search methods with variable sample size
From MaRDI portal
Recommendations
- A nonmonotone line search method for stochastic optimization problems
- Line search methods with variable sample size for unconstrained optimization
- A nonmonotone line search method for noisy minimization
- Variable-sample methods for stochastic optimization
- Nonmonotone line searches for optimization algorithms
Cites work
- A Nonmonotone Line Search Technique and Its Application to Unconstrained Optimization
- A Nonmonotone Line Search Technique for Newton’s Method
- A class of nonmonotone stabilization methods in unconstrained optimization
- A derivative-free line search and global convergence of Broyden-like method for nonlinear equations
- A derivative-free nonmonotone line search and its application to the spectral residual method
- A derivative-free nonmonotone line-search technique for unconstrained optimization
- A nonmonotone spectral projected gradient method for large-scale topology optimization problems
- An Assessment of Nonmonotone Linesearch Techniques for Unconstrained Optimization
- An adaptive Monte Carlo algorithm for computing mixed logit estimators
- An experimental methodology for response surface optimization methods
- Benchmarking optimization software with performance profiles
- Convergence theory for nonconvex stochastic programming with an application to mixed logit
- Efficient sample sizes in stochastic nonlinear programming
- Globally convergent Jacobian smoothing inexact Newton methods for NCP
- Globally convergent inexact quasi-Newton methods for solving nonlinear systems
- Hybrid deterministic-stochastic methods for data fitting
- Introduction to Stochastic Search and Optimization
- Introductory lectures on convex optimization: a basic course
- Line search methods with variable sample size for unconstrained optimization
- Numerical Optimization
- On Choosing Parameters in Retrospective-Approximation Algorithms for Stochastic Root Finding and Simulation Optimization
- On the nonmonotone line search
- On the use of stochastic Hessian information in optimization methods for machine learning
- Optimality functions in stochastic programming
- Sample size selection in optimization methods for machine learning
- Spectral residual method without gradient information for solving large-scale nonlinear systems of equations
- The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
- Variable-number sample-path optimization
- Variable-sample methods for stochastic optimization
Cited in (20 documents)
- Barzilai–Borwein method with variable sample size for stochastic linear complementarity problems
- Penalty variable sample size method for solving optimization problems with equality constraints in a form of mathematical expectation
- A nonmonotone line search method for stochastic optimization problems
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- Greedy Sampling Using Nonlinear Optimization
- Tensor Bernstein concentration inequalities with an application to sample estimators for high-order moments
- A stochastic gradient method with variance control and variable learning rate for deep learning
- AN-SPS: adaptive sample size nonmonotone line search spectral projected subgradient method for convex constrained optimization problems
- Subsampled first-order optimization methods with applications in imaging
- Spectral projected gradient method for stochastic optimization
- Inexact restoration approach for minimization with inexact evaluation of the objective function
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- Variable-sample methods for stochastic optimization
- Variable sample size method for equality constrained optimization problems
- A non-monotone trust-region method with noisy oracles and additional sampling
- A nonmonotone line search method for noisy minimization
- Newton-like method with diagonal correction for distributed optimization
- Subsampled nonmonotone spectral gradient methods
- Line search methods with variable sample size for unconstrained optimization
- On the employment of inexact restoration for the minimization of functions whose evaluation is subject to errors
This page was built for publication: Nonmonotone line search methods with variable sample size
MaRDI item Q2340358