Learning with generalization capability by kernel methods of bounded complexity
From MaRDI portal
Publication: 558012
DOI: 10.1016/j.jco.2004.11.002
zbMath: 1095.68044
OpenAlex: W2119028526
MaRDI QID: Q558012
Marcello Sanguineti, Vera Kurková
Publication date: 30 June 2005
Published in: Journal of Complexity
Full work available at URL: http://www.nusl.cz/ntk/nusl-34137
Kernel methods; Generalization; Minimization of regularized empirical errors; Model complexity; Supervised learning; Upper bounds on rates of approximate optimization
Computational learning theory (68Q32) Sampling theory, sample surveys (62D05) Learning and adaptive systems in artificial intelligence (68T05)
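The keywords above describe learning by minimization of regularized empirical error over a kernel-induced hypothesis space. As a hedged illustration of that setting (not the paper's own algorithm), the sketch below implements kernel ridge regression: by the representer theorem, the minimizer of the regularized empirical error over the RKHS of a Gaussian kernel is a kernel expansion over the sample, with coefficients obtained from a regularized linear system. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not from the paper): minimize the regularized
# empirical error
#   (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
# over the RKHS of a Gaussian kernel. The minimizer has the form
#   f(x) = sum_j c_j K(x, x_j),  with  c = (K + lam * n * I)^{-1} y.

def gaussian_kernel(A, B, width=0.2):
    """Gram matrix K[i, j] = exp(-||A[i] - B[j]||^2 / (2 * width^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit(X, y, lam=1e-6, width=0.2):
    """Coefficients c of the regularized empirical-error minimizer."""
    n = len(X)
    K = gaussian_kernel(X, X, width)
    # Regularized linear system; lam controls the trade-off between
    # data fit and RKHS-norm (model-complexity) penalty.
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, c, X_new, width=0.2):
    """Evaluate the kernel expansion f(x) = sum_j c_j K(x, x_j)."""
    return gaussian_kernel(X_new, X_train, width) @ c

# Toy usage: fit a smooth target from 20 samples on [0, 1].
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
c = fit(X, y)
y_hat = predict(X, c, X)
```

Larger values of `lam` shrink the RKHS norm of the solution, which is the complexity bound the keywords refer to; smaller values drive the fit toward interpolation of the sample.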
Related Items
- Accuracy of suboptimal solutions to kernel principal component analysis
- Learning with Boundary Conditions
- On spectral windows in supervised learning from data
- Radial fuzzy systems
- Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
- Functional optimal estimation problems and their solution by nonlinear approximation schemes
- A recursive algorithm for nonlinear least-squares problems
- New insights into Witsenhausen's counterexample
- Regularized vector field learning with sparse approximation for mismatch removal
- Estimates of variation with respect to a set and applications to optimization problems
- Management of water resource systems in the presence of uncertainties by nonlinear approximation techniques and deterministic sampling
- The weight-decay technique in learning from data: an optimization point of view
- Power series kernels
- Estimates of the approximation error using Rademacher complexity: Learning vector-valued functions
- Rates of minimization of error functionals over Boolean variable-basis functions
Cites Work
- Some remarks on the condition number of a real random square matrix
- Well-posed optimization problems
- The geometry of ill-conditioning
- On uniformly convex functionals
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Perturbations, approximations and sensitivity analysis of optimal control systems
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Metric spaces and completely monotone functions
- On the mathematical foundations of learning
- Regularization Algorithms for Learning That Are Equivalent to Multilayer Networks
- An Approach to Time Series Analysis
- Universal approximation bounds for superpositions of a sigmoidal function
- Bounds on rates of variable-basis and neural-network approximation
- Comparison of worst case errors in linear and neural network approximation
- Error Estimates for Approximate Optimization by the Extended Ritz Method
- A Correspondence Between Bayesian Estimation on Stochastic Processes and Smoothing by Splines
- Theory of Reproducing Kernels