Adaptive sampling for incremental optimization using stochastic gradient descent
From MaRDI portal
Publication:2835640
Recommendations
- A heuristic adaptive fast gradient method in stochastic optimization problems
- Adaptive subgradient methods for online learning and stochastic optimization
- Large-scale machine learning with stochastic gradient descent
- Adaptive sampling strategies for stochastic optimization
- Optimal survey schemes for stochastic gradient descent with applications to \(M\)-estimation
Cites work
- Scientific article; zbMATH DE number 3954145 (no title available)
- Scientific article; zbMATH DE number 1569102 (no title available)
- A Stochastic Approximation Method
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Introductory lectures on convex optimization. A basic course.
- Minimizing finite sums with the stochastic average gradient
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing
Cited in (24 documents)
- Stochastic algorithm for optimization and statistical learning
- Stochastic gradient descent in continuous time: a central limit theorem
- Efficient distance metric learning by adaptive sampling and mini-batch stochastic gradient descent (SGD)
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Statistical inference for model parameters in stochastic gradient descent
- Auxiliary Gradient-Based Sampling Algorithms
- An improvement of stochastic gradient descent approach for mean-variance portfolio optimization problem
- Adaptive subgradient methods for online learning and stochastic optimization
- Stochastic gradient descent for linear systems with missing data
- A heuristic adaptive fast gradient method in stochastic optimization problems
- Sample size selection in optimization methods for machine learning
- Optimal survey schemes for stochastic gradient descent with applications to \(M\)-estimation
- Batched Stochastic Gradient Descent with Weighted Sampling
- Statistical inference for the population landscape via moment-adjusted stochastic gradients
- Bolstering stochastic gradient descent with model building
- Stochastic gradient descent: where optimization meets machine learning
- Adaptive methods using element-wise \(p\)th power of stochastic gradient for nonconvex optimization in deep neural networks
- Machine Learning: ECML 2004
- Adaptive infinite dropout for noisy and sparse data streams
- SGDLibrary: a MATLAB library for stochastic optimization algorithms
- Constrained and composite optimization via adaptive sampling methods
- Large-scale machine learning with stochastic gradient descent
- Adaptive sampling strategies for stochastic optimization
- Adaptive sequential machine learning
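Several works listed above (notably "Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm") study non-uniform sampling for SGD-type methods. As an illustration only, not the scheme of this publication, a minimal sketch of randomized Kaczmarz with rows sampled proportionally to their squared norms might look like:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=2000, seed=0):
    """Solve the consistent linear system Ax = b by randomized Kaczmarz:
    at each step, pick row i with probability proportional to ||a_i||^2
    and project the iterate onto the hyperplane a_i . x = b_i."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)  # squared row norms
    p = row_norms_sq / row_norms_sq.sum()       # sampling distribution
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=p)
        # Orthogonal projection onto the i-th constraint hyperplane.
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Usage: recover a planted solution of a consistent system.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
x_hat = randomized_kaczmarz(A, A @ x_true)
print(float(np.linalg.norm(x_hat - x_true)))  # small residual error
```

For consistent systems this iteration converges linearly in expectation; the row-norm sampling distribution is what links it to weighted-sampling SGD in the cited work.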
This page was built for publication: Adaptive sampling for incremental optimization using stochastic gradient descent