Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression

From MaRDI portal

Publication:2934003

zbMath: 1318.62224
arXiv: 1303.6149
MaRDI QID: Q2934003

Francis Bach

Publication date: 8 December 2014

Full work available at URL: https://arxiv.org/abs/1303.6149
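The paper's subject, Polyak-Ruppert averaged stochastic gradient descent for logistic regression, can be sketched as follows. This is a minimal illustrative implementation, not the paper's own code: the step size `c / sqrt(k)` and the helper name `averaged_sgd_logistic` are assumptions made for the example.

```python
import numpy as np

def averaged_sgd_logistic(X, y, n_passes=1, c=1.0):
    """Polyak-Ruppert averaged SGD for logistic regression.

    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    Uses a decaying step size gamma_k = c / sqrt(k) (an illustrative
    choice, not tuned to reproduce the paper's experiments) and
    returns the running average of the iterates.
    """
    n, d = X.shape
    w = np.zeros(d)       # current SGD iterate
    w_avg = np.zeros(d)   # running average of iterates
    k = 0
    for _ in range(n_passes):
        for i in np.random.permutation(n):
            k += 1
            margin = y[i] * X[i].dot(w)
            # gradient of the logistic loss log(1 + exp(-y w^T x))
            grad = -y[i] * X[i] / (1.0 + np.exp(margin))
            w -= (c / np.sqrt(k)) * grad
            # incremental update of the iterate average
            w_avg += (w - w_avg) / k
    return w_avg
```

Averaging the iterates, rather than returning the last one, is what yields the improved convergence rates studied in the paper, without requiring knowledge of the local strong-convexity constant.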

Related Items (31)

Lp and almost sure rates of convergence of averaged stochastic gradient algorithms: locally strongly convex objective
Some Limit Properties of Markov Chains Induced by Recursive Stochastic Algorithms
Inversion-free subsampling Newton's method for large sample logistic regression
Streaming constrained binary logistic regression with online standardized data
Unnamed Item
Approximate maximum likelihood estimation for population genetic inference
Semi-discrete optimal transport: hardness, regularization and numerical solution
Convergence of the exponentiated gradient method with Armijo line search
Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
Adaptive step size rules for stochastic optimization in large-scale learning
Unnamed Item
Composite Convex Minimization Involving Self-concordant-Like Cost Functions
On the rates of convergence of parallelized averaged stochastic gradient algorithms
Bridging the gap between constant step size stochastic gradient descent and Markov chains
Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient
Stochastic heavy ball
Finite-sample analysis of \(M\)-estimators using self-concordance
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Unnamed Item
Unnamed Item
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
Unnamed Item
When will gradient methods converge to max-margin classifier under ReLU models?
Asymptotic distribution and convergence rates of stochastic algorithms for entropic optimal transportation between probability measures
An efficient averaged stochastic Gauss-Newton algorithm for estimating parameters of nonlinear regressions models
An Efficient Stochastic Newton Algorithm for Parameter Estimation in Logistic Regressions
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)
Online estimation of the asymptotic variance for averaged stochastic gradient algorithms
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Convergence Rate of Incremental Gradient and Incremental Newton Methods
Generalized self-concordant functions: a recipe for Newton-type methods

This page was built for publication: Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression