Adaptive sampling for incremental optimization using stochastic gradient descent
DOI: 10.1007/978-3-319-24486-0_21 · zbMATH Open: 1471.68222 · OpenAlex: W2294540259 · MaRDI QID: Q2835640 · FDO: Q2835640
Authors: Guillaume Papa, Pascal Bianchi, Stephan Clémençon
Publication date: 30 November 2016
Published in: Lecture Notes in Computer Science
Full work available at URL: https://doi.org/10.1007/978-3-319-24486-0_21
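The record's title refers to SGD in which training examples are drawn non-uniformly, with sampling probabilities adapted to the data. As a rough illustration of the general idea only, and not the authors' specific scheme, the following minimal Python sketch runs importance-sampled SGD on a least-squares problem: example i is drawn with probability p_i and its gradient is rescaled by 1/(n p_i) so the update stays an unbiased estimate of the full gradient. All names and the choice of weights (squared row norms) are illustrative assumptions.

```python
import numpy as np

# Sketch of SGD with non-uniform (importance) sampling for least squares.
# Illustrative only; not the adaptive scheme from the paper. Example i is
# drawn with probability p[i], and its gradient is scaled by 1/(n * p[i])
# so the update remains an unbiased estimate of the full gradient.

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

# Sampling weights: proportional to squared row norms of X (a common static
# choice); an adaptive method would update these as the run progresses.
p = np.linalg.norm(X, axis=1) ** 2
p /= p.sum()

w = np.zeros(d)
step = 0.1
for t in range(1, 20001):
    i = rng.choice(n, p=p)
    grad_i = (X[i] @ w - y[i]) * X[i]           # gradient of 0.5*(x_i.w - y_i)^2
    w -= (step / t**0.5) * grad_i / (n * p[i])  # importance-weighted update
print("parameter error:", np.linalg.norm(w - w_true))
```

Sampling proportionally to gradient-related quantities (here, row norms) reduces the variance of the stochastic gradient relative to uniform sampling, which is the motivation shared by several of the works listed below (e.g., the weighted-sampling Kaczmarz paper).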
Recommendations
- A heuristic adaptive fast gradient method in stochastic optimization problems
- Adaptive subgradient methods for online learning and stochastic optimization
- Large-scale machine learning with stochastic gradient descent
- Adaptive sampling strategies for stochastic optimization
- Optimal survey schemes for stochastic gradient descent with applications to \(M\)-estimation
MSC classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Approximation methods and heuristics in mathematical programming (90C59)
- Stochastic approximation (62L20)
Cites Work
- Title not available
- Introductory lectures on convex optimization. A basic course.
- A Stochastic Approximation Method
- Robust Stochastic Approximation Approach to Stochastic Programming
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing
- Title not available
- Minimizing finite sums with the stochastic average gradient
Cited In (20)
- Stochastic gradient descent for linear systems with missing data
- Adaptive infinite dropout for noisy and sparse data streams
- Large-scale machine learning with stochastic gradient descent
- Statistical inference for the population landscape via moment-adjusted stochastic gradients
- Adaptive sequential machine learning
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Adaptive methods using element-wise \(p\)th power of stochastic gradient for nonconvex optimization in deep neural networks
- Machine Learning: ECML 2004
- Stochastic algorithm for optimization and statistical learning
- Statistical inference for model parameters in stochastic gradient descent
- An improvement of stochastic gradient descent approach for mean-variance portfolio optimization problem
- Adaptive sampling strategies for stochastic optimization
- Auxiliary Gradient-Based Sampling Algorithms
- Optimal survey schemes for stochastic gradient descent with applications to \(M\)-estimation
- Batched Stochastic Gradient Descent with Weighted Sampling
- Sample size selection in optimization methods for machine learning
- Adaptive subgradient methods for online learning and stochastic optimization
- A heuristic adaptive fast gradient method in stochastic optimization problems
- SGDLibrary: a MATLAB library for stochastic optimization algorithms
- Stochastic gradient descent in continuous time: a central limit theorem