Learning with generalization capability by kernel methods of bounded complexity
From MaRDI portal
Publication: 558012
DOI: 10.1016/j.jco.2004.11.002
zbMath: 1095.68044
MaRDI QID: Q558012
Věra Kůrková, Marcello Sanguineti
Publication date: 30 June 2005
Published in: Journal of Complexity
Full work available at URL: http://www.nusl.cz/ntk/nusl-34137
Keywords: kernel methods; generalization; minimization of regularized empirical errors; model complexity; supervised learning; upper bounds on rates of approximate optimization
68Q32: Computational learning theory
62D05: Sampling theory, sample surveys
68T05: Learning and adaptive systems in artificial intelligence
Related Items
- Learning with Boundary Conditions
- New insights into Witsenhausen's counterexample
- Management of water resource systems in the presence of uncertainties by nonlinear approximation techniques and deterministic sampling
- Power series kernels
- Accuracy of suboptimal solutions to kernel principal component analysis
- Estimates of variation with respect to a set and applications to optimization problems
- Estimates of the approximation error using Rademacher complexity: Learning vector-valued functions
- The weight-decay technique in learning from data: an optimization point of view
- Functional optimal estimation problems and their solution by nonlinear approximation schemes
- A recursive algorithm for nonlinear least-squares problems
- Rates of minimization of error functionals over Boolean variable-basis functions
- Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
Cites Work
- Unnamed Item
- Some remarks on the condition number of a real random square matrix
- Well-posed optimization problems
- The geometry of ill-conditioning
- On uniformly convex functionals
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Perturbations, approximations and sensitivity analysis of optimal control systems
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Metric spaces and completely monotone functions
- On the mathematical foundations of learning
- Regularization Algorithms for Learning That Are Equivalent to Multilayer Networks
- An Approach to Time Series Analysis
- Universal approximation bounds for superpositions of a sigmoidal function
- Bounds on rates of variable-basis and neural-network approximation
- Comparison of worst case errors in linear and neural network approximation
- Error Estimates for Approximate Optimization by the Extended Ritz Method
- A Correspondence Between Bayesian Estimation on Stochastic Processes and Smoothing by Splines
- Theory of Reproducing Kernels