Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
zbMATH Open: 1441.62215 · arXiv: 1602.05419 · MaRDI QID: Q4637017 · FDO: Q4637017
Authors: Aymeric Dieuleveut, Nicolas Flammarion, Francis Bach
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1602.05419
Recommendations
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Scientific article (zbMATH DE number 3992650)
- The convergence rate of learning algorithms for least square regression with sample dependent hypothesis spaces
- Optimal strong convergence rates in nonparametric regression
- Scientific article (zbMATH DE number 4109874)
- On convergence rates of convex regression in multiple dimensions
- Strong convergence rate of the least median absolute estimator in linear regression models
- On the convergence of pseudo-linear regression algorithms
- Scientific article (zbMATH DE number 3856232)
Keywords: convex optimization; stochastic gradient; accelerated gradient; least-squares regression; non-parametric estimation
MSC classifications:
- Computational methods for problems pertaining to statistics (62-08)
- Density estimation (62G07)
- Linear regression; mixed models (62J05)
- Stochastic approximation (62L20)
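The keywords and classifications above summarize the paper's setting: stochastic gradient methods, combined with averaging and acceleration, for least-squares regression in a streaming model. As a hedged illustration of that setting only (this is the plain Polyak-Ruppert averaged SGD baseline, not the authors' accelerated algorithm; the function name, step size, and toy data model are invented for the example), here is a minimal Python sketch:

```python
import numpy as np

def averaged_sgd_least_squares(stream, d, step=0.1, n_iter=10_000):
    """Polyak-Ruppert averaged SGD for streaming least-squares.

    `stream` yields (x, y) pairs one at a time. Illustrative sketch of
    the averaged (non-accelerated) baseline with a constant step size.
    """
    theta = np.zeros(d)       # current iterate
    theta_bar = np.zeros(d)   # running average of iterates (the returned estimator)
    for t, (x, y) in zip(range(1, n_iter + 1), stream):
        grad = (x @ theta - y) * x            # stochastic gradient of 0.5*(x.theta - y)^2
        theta -= step * grad                  # constant-step-size SGD update
        theta_bar += (theta - theta_bar) / t  # online (uniform) average
    return theta_bar

# Toy usage: recover theta* = (1, ..., 1) from noisy linear observations.
rng = np.random.default_rng(0)
d = 5
theta_star = np.ones(d)

def gen():
    while True:
        x = rng.normal(size=d)
        yield x, x @ theta_star + 0.1 * rng.normal()

print(averaged_sgd_least_squares(gen(), d))
```

Averaging the iterates, rather than returning the last one, is what allows a constant step size while still damping the noise; the paper's contribution is to combine this with acceleration to improve the rates further.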
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Title not available
- Theory of Reproducing Kernels
- Introductory lectures on convex optimization. A basic course.
- Nonparametric stochastic approximation with large step-sizes
- Title not available
- Support Vector Machines
- Acceleration of Stochastic Approximation by Averaging
- Title not available
- Title not available
- A Stochastic Approximation Method
- Introduction to nonparametric estimation
- On early stopping in gradient descent learning
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Optimal rates for the regularized least-squares algorithm
- Smooth Optimization with Approximate Gradient
- First-order methods of smooth convex optimization with inexact oracle
- Title not available
- Dual averaging methods for regularized stochastic learning and online optimization
- An optimal method for stochastic composite optimization
- Performance of empirical risk minimization in linear aggregation
- Online gradient descent learning algorithms
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Random design analysis of ridge regression
- Model selection for regularized least-squares algorithm in learning theory
- Some methods of speeding up the convergence of iteration methods
- An alternative point of view on Lepski's method
- The lower tail of random quadratic forms with applications to ordinary least squares
- Optimal distributed online prediction using mini-batches
- On the Averaged Stochastic Approximation for Linear Regression
- Optimal rates for multi-pass stochastic gradient methods
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
Cited In (18)
- Some limit properties of Markov chains induced by recursive stochastic algorithms
- Adaptivity of stochastic gradient methods for nonconvex optimization
- On stochastic accelerated gradient with convergence rate of regression learning
- On the rates of convergence of parallelized averaged stochastic gradient algorithms
- Title not available
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Concentration bounds for temporal difference learning with linear function approximation: the case of batch data and uniform sampling
- Generalization properties of doubly stochastic learning algorithms
- Dual space preconditioning for gradient descent
- Finite impulse response models: a non-asymptotic analysis of the least squares estimator
- From inexact optimization to learning via gradient concentration
- Title not available
- On the convergence of pseudo-linear regression algorithms
- On the adaptivity of stochastic gradient-based optimization
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- Nonparametric stochastic approximation with large step-sizes
- Memory-sample tradeoffs for linear regression with small error
- Dimension independent excess risk by stochastic gradient descent