SGD-QN: careful quasi-Newton stochastic gradient descent
zbMATH Open: 1235.68130 · MaRDI QID: Q2880947
Authors: Antoine Bordes, Léon Bottou, Patrick Gallinari
Publication date: 17 April 2012
Published in: Journal of Machine Learning Research (JMLR)
Full work available at URL: http://www.jmlr.org/papers/v10/bordes09a.html
Mathematics Subject Classification (MSC)
- 62H30 Classification and discrimination; cluster analysis (statistical aspects)
- 68T05 Learning and adaptive systems in artificial intelligence
- 62J07 Ridge regression; shrinkage estimators (Lasso)
Cited In (37)
- ASD+M: automatic parameter tuning in stochastic optimization and on-line learning
- Large-scale machine learning with stochastic gradient descent
- A class of parallel doubly stochastic algorithms for large-scale learning
- An online AUC formulation for binary classification
- Quasi-Newton methods for machine learning: forget the past, just sample
- QNG: a quasi-natural gradient method for large-scale statistical learning
- Stochastic quasi-Newton with line-search regularisation
- Optimization methods for large-scale machine learning
- Stochastic subgradient descent method for large-scale robust chance-constrained support vector machines
- A globally convergent incremental Newton method
- Title not available
- Stochastic gradient descent with Barzilai-Borwein update step for SVM
- An adaptive Hessian approximated stochastic gradient MCMC method
- Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Real-Time Regression Analysis of Streaming Clustered Data With Possible Abnormal Data Batches
- Robust adaptive learning of feedforward neural networks via LMI optimizations
- Hierarchical linear support vector machine
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- A robust multi-batch L-BFGS method for machine learning
- Quasi-Newton smoothed functional algorithms for unconstrained and constrained simulation optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis
- Erratum: SGDQN is less careful than expected
- Kalman-based stochastic gradient method with stop condition and insensitivity to conditioning
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Parsimonious online learning with kernels via sparse projections in function space
- Online renewable smooth quantile regression
- A stochastic quasi-Newton method for large-scale optimization
- A fast SVD-hidden-nodes based extreme learning machine for large-scale data analytics
- Approximating Hessian matrices using Bayesian inference: a new approach for quasi-Newton methods in stochastic optimization
- Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning
- Minimizing finite sums with the stochastic average gradient
- Optimization for deep learning: an overview
- SABRINA: a stochastic subspace majorization-minimization algorithm