Sample size selection in optimization methods for machine learning
DOI: 10.1007/s10107-012-0572-5 · zbMATH Open: 1252.49044 · OpenAlex: W2061570747 · Wikidata: Q105583393 · Scholia: Q105583393 · MaRDI QID: Q715253 · FDO: Q715253
Authors: Gillian M. Chin, Yuchen Wu, R. H. Byrd, Jorge Nocedal
Publication date: 2 November 2012
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-012-0572-5
Recommendations
- On the use of stochastic Hessian information in optimization methods for machine learning
- Large-scale machine learning with stochastic gradient descent
- Adaptive sampling strategies for stochastic optimization
- Adaptive sampling for incremental optimization using stochastic gradient descent
- Survey of solving the optimization problems for sparse learning
Mathematics Subject Classification
- Numerical mathematical programming methods (65K05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Nonlinear programming (90C30)
- Numerical methods based on nonlinear programming (49M37)
- Newton-type methods (49M15)
Cites Work
- Newton's Method for Large Bound-Constrained Optimization Problems
- An adaptive Monte Carlo algorithm for computing mixed logit estimators
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Acceleration of Stochastic Approximation by Averaging
- A Stochastic Approximation Method
- Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming
- Primal-dual subgradient methods for convex problems
- On the use of stochastic Hessian information in optimization methods for machine learning
- The conjugate gradient method in extremal problems
- De-noising by soft-thresholding
- Sparse Reconstruction by Separable Approximation
- A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds
- The sample average approximation method for stochastic discrete optimization
- Dual averaging methods for regularized stochastic learning and online optimization
- Accelerated block-coordinate relaxation for regularized optimization
- A New Active Set Algorithm for Box Constrained Optimization
- A simulation-based approach to two-stage stochastic programming with recourse
- Efficient online and batch learning using forward backward splitting
- On the rate of convergence of optimal solutions of Monte Carlo approximations of stochastic programs
- On the Goldstein-Levitin-Polyak gradient projection method
- Optimal distributed online prediction using mini-batches
- Variable-number sample-path optimization
- Convergence Analysis of Stochastic Algorithms
- Title not available
Cited In (84)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- A trust region method for noisy unconstrained optimization
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- Parallel optimization techniques for machine learning
- On the use of stochastic Hessian information in optimization methods for machine learning
- Linesearch Newton-CG methods for convex optimization with noise
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Sub-sampled Newton methods
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Convergence of the reweighted \(\ell_1\) minimization algorithm for \(\ell_2-\ell_p\) minimization
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Newton-like method with diagonal correction for distributed optimization
- Variance-based extragradient methods with line search for stochastic variational inequalities
- Nonmonotone line search methods with variable sample size
- Second-order orthant-based methods with enriched Hessian information for sparse \(\ell _1\)-optimization
- Randomized approach to nonlinear inversion combining random and optimized simultaneous sources and detectors
- Algorithms for Kullback-Leibler approximation of probability measures in infinite dimensions
- Optimization methods for large-scale machine learning
- Adaptive deep learning for high-dimensional Hamilton-Jacobi-Bellman equations
- Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
- Accelerating mini-batch SARAH by step size rules
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- An inexact successive quadratic approximation method for L-1 regularized optimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- On sampling rates in simulation-based recursions
- Subsampled nonmonotone spectral gradient methods
- A robust multi-batch L-BFGS method for machine learning
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Descent direction method with line search for unconstrained optimization in noisy environment
- Probability maximization via Minkowski functionals: convex representations and tractable resolution
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Spectral projected gradient method for stochastic optimization
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- A fully stochastic second-order trust region method
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- Adaptive sampling strategies for stochastic optimization
- A nonmonotone line search method for stochastic optimization problems
- A stochastic quasi-Newton method for large-scale optimization
- Risk-averse design of tall buildings for uncertain wind conditions
- Statistically equivalent surrogate material models: impact of random imperfections on the elasto-plastic response
- Adaptive stochastic approximation algorithm
- Gradient-based adaptive stochastic search for simulation optimization over continuous space
- Stable architectures for deep neural networks
- Title not available
- A stochastic line search method with expected complexity analysis
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- Convergence of Newton-MR under inexact Hessian information
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Solving inverse problems using data-driven models
- Batched Stochastic Gradient Descent with Weighted Sampling
- Estimating the algorithmic variance of randomized ensembles via the bootstrap
- Ritz-like values in steplength selections for stochastic gradient methods
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- A subspace-accelerated split Bregman method for sparse data recovery with joint \(\ell_1\)-type regularizers
- A count sketch maximal weighted residual Kaczmarz method for solving highly overdetermined linear systems
- A deep learning semiparametric regression for adjusting complex confounding structures
- Deep learning for trivial inverse problems
- Robust data sampling in machine learning: a game-theoretic framework for training and validation data selection
- A stochastic variance reduced gradient method with adaptive step for stochastic optimization
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- Adaptive sampling stochastic multigradient algorithm for stochastic multiobjective optimization
- Quantity optimization of virtual sample generation with two kinds of upper bound conditions
- A stochastic gradient method with variance control and variable learning rate for deep learning
- Estimating absorption and scattering in quantitative photoacoustic tomography with an adaptive Monte Carlo method for light transport
- The sparse Kaczmarz method with surrogate hyperplane for the regularized basis pursuit problem
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- Subsampled first-order optimization methods with applications in imaging
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- A line search based proximal stochastic gradient algorithm with dynamical variance reduction
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
- Gradient-based optimisation of the conditional-value-at-risk using the multi-level Monte Carlo method
- Bolstering stochastic gradient descent with model building
- An investigation of stochastic trust-region based algorithms for finite-sum minimization
- A framework of convergence analysis of mini-batch stochastic projected gradient methods
- A multilevel method for self-concordant minimization
- A proximal stochastic quasi-Newton algorithm with dynamical sampling and stochastic line search
- A greedy average block sparse Kaczmarz method for sparse solutions of linear systems
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
Uses Software