Learning the kernel function via regularization
From MaRDI portal
Publication: 3093293
zbMATH Open: 1222.68265 · MaRDI QID: Q3093293
Authors: Charles A. Micchelli, Massimiliano Pontil
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v6/micchelli05a.html
Cited In (97)
- Regularizing multiple kernel learning using response surface methodology
- Bayesian pathway selection
- An algebraic characterization of the optimum of regularized kernel methods
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- SpicyMKL: a fast algorithm for multiple kernel learning with thousands of kernels
- Optimal sampling points in reproducing kernel Hilbert spaces
- Symmetry and antisymmetry properties of optimal solutions to regression problems
- The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
- Parameter choice strategies for least-squares approximation of noisy smooth functions on the sphere
- An efficient kernel learning algorithm for semisupervised regression problems
- Value regularization and Fenchel duality
- Ideal regularization for learning kernels from labels
- Linearly constrained reconstruction of functions by kernels with applications to machine learning
- Discriminatively regularized least-squares classification
- Online primal-dual learning for a data-dependent multi-kernel combination model with multiclass visual categorization applications
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Approximation of high-dimensional kernel matrices by multilevel circulant matrices
- Learning the kernel matrix by maximizing a KFD-based class separability criterion
- Least square regression with indefinite kernels and coefficient regularization
- Multi-penalty regularization in learning theory
- Optimization problems in statistical learning: duality and optimality conditions
- Regularization techniques and suboptimal solutions to optimization problems in learning from data
- On extension theorems and their connection to universal consistency in machine learning
- Kernels for linear time invariant system identification
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- Multi-kernel regularized classifiers
- Multi-parameter regularization and its numerical realization
- A linear functional strategy for regularized ranking
- On convergence of kernel learning estimators
- Proximal methods for the latent group lasso penalty
- Another look at linear programming for feature selection via methods of regularization
- Learning theory of Multiple Kernel Learning
- Orthogonality from disjoint support in reproducing kernel Hilbert spaces
- The optimal solution of multi-kernel regularization learning
- Learning circulant sensing kernels
- Learning Theory
- Approximation of kernel matrices by circulant matrices and its application in kernel selection methods
- A Bayesian approach to sparse dynamic network identification
- Evolution strategies based adaptive \(L_{p}\) LS-SVM
- Convex optimization in sums of Banach spaces
- Fast learning of relational kernels
- Kernel-based discretization for solving matrix-valued PDEs
- Multilevel augmentation algorithms based on fast collocation methods for solving ill-posed integral equations
- A meta-learning approach to the regularized learning -- case study: blood glucose prediction
- Adaptive kernel methods using the balancing principle
- Distributed parametric and nonparametric regression with on-line performance bounds computation
- Kernels, pre-images and optimization
- Learning with sample dependent hypothesis spaces
- Feature space perspectives for learning the kernel
- Multiscale support vector approach for solving ill-posed problems
- Group online adaptive learning
- Some properties of regularized kernel methods
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- The two-sample problem for Poisson processes: adaptive tests with a nonasymptotic wild bootstrap approach
- Additive regularization trade-off: fusion of training and validation levels in kernel methods
- Learning the coordinate gradients
- Behavior of a functional in learning theory
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Sparsity in multiple kernel learning
- Error bounds of multi-graph regularized semi-supervised classification
- Regularization in kernel learning
- Error bounds for learning the kernel
- Sampling and Stability
- Learning rates of multi-kernel regularized regression
- Random feature-based online multi-kernel learning in environments with unknown dynamics
- Multikernel regression with sparsity constraint
- A multiscale support vector regression method on spheres with data compression
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- About the non-convex optimization problem induced by non-positive semidefinite kernel learning
- Optimal learning
- Learning rates for partially linear support vector machine in high dimensions
- Learning translation invariant kernels for classification
- Data-Driven Kernel Designs for Optimized Greedy Schemes: A Machine Learning Perspective
- Manifold regularization based on Nyström type subsampling
- Improvement of multiple kernel learning using adaptively weighted regularization
- High-speed train localization algorithm via cooperative multi-classifier network using distributed heterogeneous signals
- A high-order norm-product regularized multiple kernel learning framework for kernel optimization
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- When is there a representer theorem? Reflexive Banach spaces
- Sparse RKHS estimation via globally convex optimization and its application in LPV-IO identification
- Learning with Kernels and Logical Representations
- When is there a representer theorem? Nondifferentiable regularisers and Banach spaces
- Modeling interactive components by coordinate kernel polynomial models
- A convex parametrization of a new class of universal kernel functions
- Balancing principle in supervised learning for a general regularization scheme
- Positive Semi-definite Embedding for Dimensionality Reduction and Out-of-Sample Extensions
- Multi-task learning via linear functional strategy
- Learning with centered reproducing kernels
- Refined Rademacher chaos complexity bounds with applications to the multikernel learning problem
- Learning sets with separating kernels
- Iterative regularization for learning with convex loss functions
- Sparse multiple kernel learning: minimax rates with random projection
- Infinite-\(\sigma \) limits for Tikhonov regularization
- Learning with optimal interpolation norms
- Classifier learning with a new locality regularization method
This page was built for publication: Learning the kernel function via regularization