Nonparametric stochastic approximation with large step-sizes
DOI: 10.1214/15-AOS1391 · zbMATH Open: 1346.60041 · arXiv: 1408.0361 · OpenAlex: W2964198904 · MaRDI QID: Q309706 · FDO: Q309706
Authors: Aymeric Dieuleveut, Francis Bach
Publication date: 7 September 2016
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1408.0361
Recommendations
- On convergence of kernel learning estimators
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Learning rates of least-square regularized regression
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Keywords: reproducing kernel Hilbert space; least-squares regression problem; nonparametric stochastic approximation
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Limit theorems in probability theory (60F99)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
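The scheme analyzed in this publication is unregularized stochastic gradient descent (stochastic approximation) for least-squares regression in a reproducing kernel Hilbert space, run with a large constant step-size and combined with averaging of the iterates. The sketch below is a minimal single-pass illustration of that scheme, not the authors' code: the Gaussian kernel, the step-size value, and all function names are assumptions made for this example.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel on the reals; an assumed choice for illustration."""
    return np.exp(-(x - y) ** 2 / (2.0 * bandwidth ** 2))

def averaged_kernel_sgd(xs, ys, gamma=0.5, kernel=gaussian_kernel):
    """Single-pass, constant step-size, unregularized SGD for least-squares
    in an RKHS, with averaging of the iterates.

    The iterate after step k is g_k = sum_i coef[i] * kernel(xs[i], .),
    updated as g_k = g_{k-1} - gamma * (g_{k-1}(x_k) - y_k) * kernel(x_k, .).
    Returns the coefficients of the averaged estimator (1/n) sum_k g_k.
    """
    n = len(xs)
    coef = np.zeros(n)  # coefficients of the current iterate
    avg = np.zeros(n)   # coefficients of the running average
    for k in range(n):
        # evaluate the current iterate at the incoming sample point x_k
        residual = sum(coef[i] * kernel(xs[i], xs[k]) for i in range(k)) - ys[k]
        # gradient step adds a new kernel term centered at x_k
        coef[k] = -gamma * residual
        # running average over the iterates produced so far
        avg += (coef - avg) / (k + 1)
    return avg

def predict(x, xs, coef, kernel=gaussian_kernel):
    """Evaluate an estimator given by kernel-expansion coefficients."""
    return sum(c * kernel(xi, x) for xi, c in zip(xs, coef))

# Usage on synthetic data: estimate sin(pi * x) from 200 noisy samples.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=200)
ys = np.sin(np.pi * xs) + 0.1 * rng.standard_normal(200)
coef = averaged_kernel_sgd(xs, ys)
print(predict(0.5, xs, coef))  # rough estimate of sin(pi / 2) = 1
```

The averaged iterate, rather than the last one, is the returned estimator: with a constant (non-decaying) step-size the individual iterates keep fluctuating, and averaging is what yields the convergence rates studied in the paper.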
Cites Work
- The Forgetron: A Kernel-Based Perceptron on a Budget
- Title not available
- Title not available
- Theory of Reproducing Kernels
- Some results on Tchebycheffian spline functions and stochastic processes
- An introduction to support vector machines and other kernel-based learning methods.
- Nonparametric stochastic approximation with large step-sizes
- Title not available
- Universality, Characteristic Kernels and RKHS Embedding of Measures
- Title not available
- A Stochastic Approximation Method
- Introduction to nonparametric estimation
- Boosting with early stopping: convergence and consistency
- On early stopping in gradient descent learning
- On the mathematical foundations of learning
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Randomized Algorithms for Matrices and Data
- Optimal rates for the regularized least-squares algorithm
- Title not available
- Online learning and online convex optimization
- Title not available
- Title not available
- Universal kernels
- Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization
- Learning theory estimates via integral operators and their approximations
- Online Learning with Kernels
- Online gradient descent learning algorithms
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Fast kernel classifiers with online and active learning
- A new concentration result for regularized risk minimizers
- Random design analysis of ridge regression
- Title not available
- Model selection for regularized least-squares algorithm in learning theory
Cited In (47)
- An analysis of stochastic variance reduced gradient for linear inverse problems
- Consistent change-point detection with kernels
- New efficient algorithms for multiple change-point detection with reproducing kernels
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- A kernel multiple change-point algorithm via model selection
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the regularizing property of stochastic gradient descent
- Stochastic subspace correction in Hilbert space
- Fast and strong convergence of online learning algorithms
- A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids
- Efficient mini-batch stochastic gradient descent with centroidal Voronoi tessellation for PDE-constrained optimization under uncertainty
- A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)
- Streaming kernel regression with provably adaptive mean, variance, and regularization
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- Capacity dependent analysis for functional online learning algorithms
- Distribution-free robust linear regression
- Stochastic subspace correction methods and fault tolerance
- Biparametric identification for a free boundary of ductal carcinoma in situ
- Title not available
- Optimal rates for multi-pass stochastic gradient methods
- An elementary analysis of ridge regression with random design
- Convergence of unregularized online learning algorithms
- Title not available
- Unregularized online algorithms with varying Gaussians
- From inexact optimization to learning via gradient concentration
- Online regularized learning algorithm for functional data
- High probability bounds for stochastic subgradient schemes with heavy tailed noise
- Approximate maximum likelihood estimation for population genetic inference
- Regularization: From Inverse Problems to Large-Scale Machine Learning
- Differentially private SGD with non-smooth losses
- Complexity analysis of stochastic gradient methods for PDE-constrained optimal control problems with uncertain parameters
- Parsimonious online learning with kernels via sparse projections in function space
- Optimality of robust online learning
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- Convergence rates of gradient methods for convex optimization in the space of measures
- Uncertainty quantification for stochastic approximation limits using chaos expansion
- Nonparametric stochastic approximation with large step-sizes
- Distributed SGD in overparametrized linear regression
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Optimal indirect estimation for linear inverse problems with discretely sampled functional data
- Sparse online regression algorithm with insensitive loss functions
- Differentially private SGD with random features
- Dimension independent excess risk by stochastic gradient descent
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- An Online Projection Estimator for Nonparametric Regression in Reproducing Kernel Hilbert Spaces
- Ivanov-regularised least-squares estimators over large RKHSs and their interpolation spaces