Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
From MaRDI portal
Publication:2934003
zbMath: 1318.62224 · arXiv: 1303.6149 · MaRDI QID: Q2934003
Publication date: 8 December 2014
Full work available at URL: https://arxiv.org/abs/1303.6149
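The paper's subject, Polyak-Ruppert averaged stochastic gradient descent for logistic regression, can be sketched as below. This is a minimal illustration only: the one-pass structure, the 1/sqrt(k) step-size schedule, and the function name are assumptions for the example, not the paper's exact algorithm or tuning.

```python
import numpy as np

def averaged_sgd_logistic(X, y, step=0.5, rng=None):
    """Averaged SGD sketch for logistic regression (labels y in {-1, +1}).

    Runs plain SGD on the logistic loss for one pass over the data and
    returns the running average of the iterates (Polyak-Ruppert averaging).
    The step-size schedule step / sqrt(k) is an illustrative choice.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    theta = np.zeros(d)       # current SGD iterate
    theta_bar = np.zeros(d)   # running average of iterates
    for k, i in enumerate(rng.permutation(n), start=1):
        # Stochastic gradient of log(1 + exp(-y_i <x_i, theta>))
        margin = y[i] * (X[i] @ theta)
        grad = -y[i] * X[i] / (1.0 + np.exp(margin))
        theta -= (step / np.sqrt(k)) * grad
        # Online update of the average: theta_bar_k = mean of first k iterates
        theta_bar += (theta - theta_bar) / k
    return theta_bar
```

On synthetic data the averaged iterate typically recovers the direction of the true parameter vector well after a single pass, which is the regime the paper analyzes.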
Related Items (26)
Lp and almost sure rates of convergence of averaged stochastic gradient algorithms: locally strongly convex objective
Some Limit Properties of Markov Chains Induced by Recursive Stochastic Algorithms
Inversion-free subsampling Newton's method for large sample logistic regression
Streaming constrained binary logistic regression with online standardized data
Approximate maximum likelihood estimation for population genetic inference
Semi-discrete optimal transport: hardness, regularization and numerical solution
Convergence of the exponentiated gradient method with Armijo line search
Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
Adaptive step size rules for stochastic optimization in large-scale learning
Composite Convex Minimization Involving Self-concordant-Like Cost Functions
On the rates of convergence of parallelized averaged stochastic gradient algorithms
Bridging the gap between constant step size stochastic gradient descent and Markov chains
Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient
Stochastic heavy ball
Finite-sample analysis of \(M\)-estimators using self-concordance
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
When will gradient methods converge to max-margin classifier under ReLU models?
Asymptotic distribution and convergence rates of stochastic algorithms for entropic optimal transportation between probability measures
An efficient averaged stochastic Gauss-Newton algorithm for estimating parameters of nonlinear regressions models
An Efficient Stochastic Newton Algorithm for Parameter Estimation in Logistic Regressions
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)
Online estimation of the asymptotic variance for averaged stochastic gradient algorithms
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Convergence Rate of Incremental Gradient and Incremental Newton Methods
Generalized self-concordant functions: a recipe for Newton-type methods