Learning from examples as an inverse problem
Publication: Q3093282
zbMATH Open: 1222.68180 · MaRDI QID: Q3093282
Authors: Lorenzo Rosasco, Umberto de Giovannini, Francesca Odone, Ernesto De Vito, Andrea Caponnetto
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v6/devito05a.html
Recommendations
- On regularization algorithms in learning theory
- An error analysis of Lavrentiev regularization in learning theory
- Regularization and statistical learning theory for data analysis.
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Neural Network Learning as an Inverse Problem
Mathematics Subject Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Linear operators and ill-posed problems, regularization (47A52)
Cited In (64)
- Learning particle swarming models from data with Gaussian processes
- Analysis of regularized least squares ranking with centered reproducing kernel
- Learning spectral windowing parameters for regularization using unbiased predictive risk and generalized cross validation techniques for multiple data sets
- Spectral algorithms for learning with dependent observations
- Least squares approximations in linear statistical inverse learning problems
- Convergence Rates for Learning Linear Operators from Noisy Data
- Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Kernel methods in system identification, machine learning and function estimation: a survey
- A multiscale support vector regression method on spheres with data compression
- A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery
- Two-Layer Neural Networks with Values in a Banach Space
- Mini-workshop: Deep learning and inverse problems. Abstracts from the mini-workshop held March 4--10, 2018
- A note on the prediction error of principal component regression in high dimensions
- Title not available
- Learning theory of distributed spectral algorithms
- Machine learning with kernels for portfolio valuation and risk management
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Generalized Kalman smoothing: modeling and algorithms
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- Kernel variable selection for multicategory support vector machines
- Thresholded spectral algorithms for sparse approximations
- Smoothed residual stopping for statistical inverse problems via truncated SVD estimation
- Wasserstein-based projections with applications to inverse problems
- Distributed least squares prediction for functional linear regression
- Error analysis on regularized regression based on the maximum correntropy criterion
- Learning regularization parameters for general-form Tikhonov
- Multi-penalty regularization in learning theory
- Manifold regularization based on Nyström type subsampling
- Feasibility-based fixed point networks
- Convergence rates of kernel conjugate gradient for random design regression
- An elementary analysis of ridge regression with random design
- Kernel partial least squares for stationary data
- Complexity control in statistical learning
- On regularization algorithms in learning theory
- Diffusion maps for changing data
- Optimal rates for regularization of statistical inverse learning problems
- Neural Network Learning as an Inverse Problem
- Efficient regularized least-squares algorithms for conditional ranking on relational data
- Machine learning from examples: Inductive and Lazy methods
- Regularization: From Inverse Problems to Large-Scale Machine Learning
- Title not available
- Nyström type subsampling analyzed as a regularized projection
- Optimal filters from calibration data for image deconvolution with data acquisition error
- On spectral windows in supervised learning from data
- A study on regularization for discrete inverse problems with model-dependent noise
- Regression learning based on incomplete relationships between attributes
- Multi-output learning via spectral filtering
- Adaptive kernel methods using the balancing principle
- Multi-task learning via linear functional strategy
- Title not available
- Learning from non-random data in Hilbert spaces: an optimal recovery perspective
- Neural nets learning as an inverse problem
- Sobolev norm learning rates for regularized least-squares algorithms
- On a regularization of unsupervised domain adaptation in RKHS
- Random discretization of the finite Fourier transform and related kernel random matrices
- Statistical performance of support vector machines
- Estimating adsorption isotherm parameters in chromatography via a virtual injection promoting double feed-forward neural network
- Regularization and statistical learning theory for data analysis.
- Representation and reconstruction of covariance operators in linear inverse problems
- Convex regularization in statistical inverse learning problems
- Geometry on probability spaces
- Consistent learning by composite proximal thresholding
This page was built for publication: Learning from examples as an inverse problem