Regularization: From Inverse Problems to Large-Scale Machine Learning
From MaRDI portal
Publication:5028166
Cites work
- scientific article; zbMATH DE number 45848 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- scientific article; zbMATH DE number 204193 (no title available)
- scientific article; zbMATH DE number 2107836 (no title available)
- scientific article; zbMATH DE number 893887 (no title available)
- scientific article; zbMATH DE number 936298 (no title available)
- A distribution-free theory of nonparametric regression
- A mathematical introduction to compressive sensing
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- Boosting With the L2 Loss
- Cross-validation based adaptation for regularization operators in learning theory
- Discretization Error Analysis for Tikhonov Regularization
- Geometric harmonics: a novel tool for multiscale out-of-sample extension of empirical functions
- Learning Theory
- Learning from examples as an inverse problem
- Learning theory estimates via integral operators and their approximations
- Linear integral equations
- Linear inverse problems with discrete data. I. General formulation and singular system analysis
- Manifold regularization: a geometric framework for learning from labeled and unlabeled examples
- Model selection for regularized least-squares algorithm in learning theory
- Nonparametric stochastic approximation with large step-sizes
- On some extensions of Bernstein's inequality for self-adjoint operators
- On the mathematical foundations of learning
- Optimal rates for multi-pass stochastic gradient methods
- Optimal rates for regularization of statistical inverse learning problems
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- Optimum bounds for the distributions of martingales in Banach spaces
- Real Analysis and Probability
- Regularization algorithms for learning that are equivalent to multilayer networks
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Shannon sampling and function reconstruction from point values
- Statistical properties of kernel principal component analysis
- Sums and Gaussian vectors
- Support Vector Machines
- Theory of Reproducing Kernels
- User-friendly tail bounds for sums of random matrices
Cited in (4)