Sample size selection in optimization methods for machine learning
DOI: 10.1007/s10107-012-0572-5 · zbMath: 1252.49044 · Wikidata: Q105583393 · MaRDI QID: Q715253
Richard H. Byrd, Gillian M. Chin, Jorge Nocedal, Yuchen Wu
Publication date: 2 November 2012
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-012-0572-5
Mathematics Subject Classification:
65K05: Numerical mathematical programming methods
90C30: Nonlinear programming
68T05: Learning and adaptive systems in artificial intelligence
49M15: Newton-type methods
49M37: Numerical methods based on nonlinear programming
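The paper's central device is a variance ("norm") test that grows the sample size used for gradient estimates once sampled-gradient noise dominates the gradient signal. Below is a minimal Python sketch of that test on a toy least-squares problem; the function names, step-size rule, and plain-SGD update are illustrative assumptions, not the paper's algorithm, which combines the test with gradient and Newton-CG methods.

```python
import numpy as np

def per_example_grads(A, b, x, idx):
    # Least-squares per-example gradients: grad_i = a_i * (a_i @ x - b_i).
    r = A[idx] @ x - b[idx]
    return A[idx] * r[:, None]

def dynamic_sample_sgd(A, b, x0, batch=8, theta=0.9, alpha=0.05,
                       iters=300, seed=0):
    """Sketch of dynamic sample-size selection (assumed interface).

    Variance test: keep the current sample S if
        sum_j Var_{i in S}[grad_i]_j / |S| <= theta^2 * ||g_S||^2;
    otherwise enlarge |S| so the inequality would hold.
    """
    rng = np.random.default_rng(seed)
    n, x = A.shape[0], x0.copy()
    for _ in range(iters):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        G = per_example_grads(A, b, x, idx)
        g = G.mean(axis=0)                  # sampled gradient g_S
        var = G.var(axis=0, ddof=1).sum()   # total sample variance, ||Var_S||_1
        gnorm2 = float(g @ g)
        if gnorm2 > 0 and var / len(idx) > theta**2 * gnorm2:
            # Noise dominates: pick the smallest |S| that would pass the test.
            batch = min(n, int(np.ceil(var / (theta**2 * gnorm2))))
        x = x - alpha * g                   # plain SGD step (illustrative)
    return x, batch

# Toy usage: the batch size typically grows as the iterates approach the solution.
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 5))
b = A @ rng.normal(size=5) + 0.1 * rng.normal(size=500)
x, final_batch = dynamic_sample_sgd(A, b, np.zeros(5))
print(final_batch)
```

As the iterates approach a solution the gradient norm shrinks, so the test forces progressively larger samples; this qualitative behavior is what the paper analyzes.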
Related Items
- Adaptive Sampling Strategies for Stochastic Optimization
- On Sampling Rates in Simulation-Based Recursions
- Stable architectures for deep neural networks
- Batched Stochastic Gradient Descent with Weighted Sampling
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
- Randomized Approach to Nonlinear Inversion Combining Random and Optimized Simultaneous Sources and Detectors
- Optimization Methods for Large-Scale Machine Learning
- A robust multi-batch L-BFGS method for machine learning
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Adaptive Deep Learning for High-Dimensional Hamilton--Jacobi--Bellman Equations
- Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
- Gradient-Based Adaptive Stochastic Search for Simulation Optimization Over Continuous Space
- Asynchronous Schemes for Stochastic and Misspecified Potential Games and Nonconvex Optimization
- Convergence of Newton-MR under Inexact Hessian Information
- A Stochastic Line Search Method with Expected Complexity Analysis
- Solving inverse problems using data-driven models
- An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- Newton-like Method with Diagonal Correction for Distributed Optimization
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- An inexact successive quadratic approximation method for L-1 regularized optimization
- A family of second-order methods for convex \(\ell_1\)-regularized optimization
- Estimating the algorithmic variance of randomized ensembles via the bootstrap
- Spectral projected gradient method for stochastic optimization
- Convergence of the reweighted \(\ell_1\) minimization algorithm for \(\ell_2-\ell_p\) minimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Adaptive stochastic approximation algorithm
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Sub-sampled Newton methods
- Second-order orthant-based methods with enriched Hessian information for sparse \(\ell_1\)-optimization
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Subsampled nonmonotone spectral gradient methods
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- A subspace-accelerated split Bregman method for sparse data recovery with joint \(\ell_1\)-type regularizers
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- Nonmonotone line search methods with variable sample size
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction
- Deep Learning for Trivial Inverse Problems
- Parallel Optimization Techniques for Machine Learning
- Algorithms for Kullback--Leibler Approximation of Probability Measures in Infinite Dimensions
- Descent direction method with line search for unconstrained optimization in noisy environment
Uses Software
Cites Work
- Primal-dual subgradient methods for convex problems
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- An adaptive Monte Carlo algorithm for computing mixed logit estimators
- Variable-number sample-path optimization
- A simulation-based approach to two-stage stochastic programming with recourse
- Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming
- On the Rate of Convergence of Optimal Solutions of Monte Carlo Approximations of Stochastic Programs
- The Sample Average Approximation Method for Stochastic Discrete Optimization
- Accelerated Block-coordinate Relaxation for Regularized Optimization
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds
- A New Active Set Algorithm for Box Constrained Optimization
- Acceleration of Stochastic Approximation by Averaging
- On the Goldstein-Levitin-Polyak gradient projection method
- Sparse Reconstruction by Separable Approximation
- Newton's Method for Large Bound-Constrained Optimization Problems
- Convergence Analysis of Stochastic Algorithms
- De-noising by soft-thresholding
- Optimal Distributed Online Prediction using Mini-Batches
- The conjugate gradient method in extremal problems
- A Stochastic Approximation Method