Sample size selection in optimization methods for machine learning
Recommendations
- On the use of stochastic Hessian information in optimization methods for machine learning
- Large-scale machine learning with stochastic gradient descent
- Adaptive sampling strategies for stochastic optimization
- Adaptive sampling for incremental optimization using stochastic gradient descent
- Survey of solving the optimization problems for sparse learning
Cites work
- scientific article; zbMATH DE number 3229153 (no title available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds
- A New Active Set Algorithm for Box Constrained Optimization
- A Stochastic Approximation Method
- A simulation-based approach to two-stage stochastic programming with recourse
- Accelerated block-coordinate relaxation for regularized optimization
- Acceleration of Stochastic Approximation by Averaging
- An adaptive Monte Carlo algorithm for computing mixed logit estimators
- Convergence Analysis of Stochastic Algorithms
- De-noising by soft-thresholding
- Dual averaging methods for regularized stochastic learning and online optimization
- Efficient online and batch learning using forward backward splitting
- Newton's Method for Large Bound-Constrained Optimization Problems
- On the Goldstein-Levitin-Polyak gradient projection method
- On the rate of convergence of optimal solutions of Monte Carlo approximations of stochastic programs
- On the use of stochastic Hessian information in optimization methods for machine learning
- Optimal distributed online prediction using mini-batches
- Primal-dual subgradient methods for convex problems
- Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming
- Sparse Reconstruction by Separable Approximation
- The conjugate gradient method in extremal problems
- The sample average approximation method for stochastic discrete optimization
- Variable-number sample-path optimization
Cited in (85)
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Deep learning for trivial inverse problems
- Robust data sampling in machine learning: a game-theoretic framework for training and validation data selection
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- A trust region method for noisy unconstrained optimization
- Parallel optimization techniques for machine learning
- A stochastic variance reduced gradient method with adaptive step for stochastic optimization
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- Adaptive sampling stochastic multigradient algorithm for stochastic multiobjective optimization
- Linesearch Newton-CG methods for convex optimization with noise
- On the use of stochastic Hessian information in optimization methods for machine learning
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Sub-sampled Newton methods
- Quantity optimization of virtual sample generation with two kinds of upper bound conditions
- Convergence of the reweighted \(\ell_1\) minimization algorithm for \(\ell_2-\ell_p\) minimization
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Newton-like method with diagonal correction for distributed optimization
- Variance-based extragradient methods with line search for stochastic variational inequalities
- Nonmonotone line search methods with variable sample size
- Second-order orthant-based methods with enriched Hessian information for sparse \(\ell _1\)-optimization
- A stochastic gradient method with variance control and variable learning rate for deep learning
- Algorithms for Kullback-Leibler approximation of probability measures in infinite dimensions
- Randomized approach to nonlinear inversion combining random and optimized simultaneous sources and detectors
- Optimization methods for large-scale machine learning
- Adaptive deep learning for high-dimensional Hamilton-Jacobi-Bellman equations
- Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
- Accelerating mini-batch SARAH by step size rules
- Estimating absorption and scattering in quantitative photoacoustic tomography with an adaptive Monte Carlo method for light transport
- The sparse Kaczmarz method with surrogate hyperplane for the regularized basis pursuit problem
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- An inexact successive quadratic approximation method for L-1 regularized optimization
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Subsampled first-order optimization methods with applications in imaging
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- A line search based proximal stochastic gradient algorithm with dynamical variance reduction
- Subsampled nonmonotone spectral gradient methods
- On sampling rates in simulation-based recursions
- A robust multi-batch L-BFGS method for machine learning
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Descent direction method with line search for unconstrained optimization in noisy environment
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
- Probability maximization via Minkowski functionals: convex representations and tractable resolution
- Spectral projected gradient method for stochastic optimization
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Gradient-based optimisation of the conditional-value-at-risk using the multi-level Monte Carlo method
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Bolstering stochastic gradient descent with model building
- A fully stochastic second-order trust region method
- An investigation of stochastic trust-region based algorithms for finite-sum minimization
- A stochastic quasi-Newton method for large-scale optimization
- Adaptive sampling strategies for stochastic optimization
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- A nonmonotone line search method for stochastic optimization problems
- Risk-averse design of tall buildings for uncertain wind conditions
- Statistically equivalent surrogate material models: impact of random imperfections on the elasto-plastic response
- Adaptive stochastic approximation algorithm
- A framework of convergence analysis of mini-batch stochastic projected gradient methods
- A multilevel method for self-concordant minimization
- A proximal stochastic quasi-Newton algorithm with dynamical sampling and stochastic line search
- Stable architectures for deep neural networks
- Gradient-based adaptive stochastic search for simulation optimization over continuous space
- scientific article; zbMATH DE number 6870925 (no title available)
- A stochastic line search method with expected complexity analysis
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Convergence of Newton-MR under inexact Hessian information
- Batched Stochastic Gradient Descent with Weighted Sampling
- Solving inverse problems using data-driven models
- Estimating the algorithmic variance of randomized ensembles via the bootstrap
- Empirical risk minimization: probabilistic complexity and stepsize strategy
- Ritz-like values in steplength selections for stochastic gradient methods
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- A subspace-accelerated split Bregman method for sparse data recovery with joint \(\ell_1\)-type regularizers
- A count sketch maximal weighted residual Kaczmarz method for solving highly overdetermined linear systems
- A greedy average block sparse Kaczmarz method for sparse solutions of linear systems
- A deep learning semiparametric regression for adjusting complex confounding structures