Regularization: From Inverse Problems to Large-Scale Machine Learning
Publication: 5028166
DOI: 10.1007/978-3-030-86664-8_5
OpenAlex: W4205328879
MaRDI QID: Q5028166
FDO: Q5028166
Alessandro Rudi, Lorenzo Rosasco, Ernesto De Vito
Publication date: 8 February 2022
Published in: Harmonic and Applied Analysis
Full work available at URL: https://doi.org/10.1007/978-3-030-86664-8_5
Cites Work
- Title not available
- Regularization Algorithms for Learning That Are Equivalent to Multilayer Networks
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Theory of Reproducing Kernels
- Nonparametric stochastic approximation with large step-sizes
- Learning Theory
- Support Vector Machines
- Title not available
- Real Analysis and Probability
- On the mathematical foundations of learning
- Title not available
- Boosting With the L2 Loss
- Title not available
- A distribution-free theory of nonparametric regression
- User-friendly tail bounds for sums of random matrices
- Optimal rates for the regularized least-squares algorithm
- A mathematical introduction to compressive sensing
- Title not available
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- Geometric harmonics: a novel tool for multiscale out-of-sample extension of empirical functions
- Title not available
- Shannon sampling and function reconstruction from point values
- Linear integral equations
- Learning theory estimates via integral operators and their approximations
- Title not available
- Title not available
- Cross-Validation Based Adaptation for Regularization Operators in Learning Theory
- Model selection for regularized least-squares algorithm in learning theory
- Sums and Gaussian vectors
- Discretization Error Analysis for Tikhonov Regularization
- Optimum bounds for the distributions of martingales in Banach spaces
- Statistical properties of kernel principal component analysis
- Linear inverse problems with discrete data. I. General formulation and singular system analysis
- Optimal rates for regularization of statistical inverse learning problems
- On some extensions of Bernstein's inequality for self-adjoint operators
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal Rates for Multi-pass Stochastic Gradient Methods
Cited In (4)