Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
DOI: 10.1214/13-AOS1095 · zbMATH Open: 1273.62090 · arXiv: 1203.0565 · MaRDI QID: Q366980 · FDO: Q366980
Authors: Taiji Suzuki, Masashi Sugiyama
Publication date: 25 September 2013
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1203.0565
Recommendations
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Learning rates of multitask kernel methods
- Sparsity in multiple kernel learning
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Learning rates for multi-kernel linear programming classifiers
- Learning rates of multi-kernel regularized regression
- On the convergence rate of \(l_{p}\)-norm multiple kernel learning
- Optimal learning rates for kernel partial least squares
- On multiple kernel learning methods
- An efficient multiple kernel learning in reproducing kernel Hilbert spaces (RKHS)
Keywords: reproducing kernel Hilbert spaces; additive model; convergence rate; smoothness; sparse learning; elastic-net; restricted isometry
MSC classification
- Asymptotic properties of parametric estimators (62F12)
- Nonparametric regression and quantile regression (62G08)
- Convex programming (90C25)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Applications of functional analysis in probability theory and statistics (46N30)
Cites Work
- High-dimensional additive modeling
- Weak convergence and empirical processes. With applications to statistics
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Simultaneous analysis of Lasso and Dantzig selector
- Regularization and Variable Selection Via the Elastic Net
- Some results on Tchebycheffian spline functions and stochastic processes
- An introduction to support vector machines and other kernel-based learning methods.
- Support Vector Machines
- The Group Lasso for Logistic Regression
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- Sparsity in multiple kernel learning
- Consistency of the group Lasso and multiple kernel learning
- Title not available
- Title not available
- Optimal rates for the regularized least-squares algorithm
- Learning the kernel matrix with semidefinite programming
- Title not available
- Learning the kernel function via regularization
- Choosing multiple parameters for support vector machines
- Eigenvalues of integral operators defined by smooth positive definite kernels
- Algorithms for learning kernels based on centered alignment
- Title not available
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Function Classes That Approximate the Bayes Risk
- Learning Bounds for Support Vector Machines with Learned Kernels
- On the convergence rate of \(l_{p}\)-norm multiple kernel learning
Cited In (27)
- Locally adaptive sparse additive quantile regression model with TV penalty
- Statistical inference in sparse high-dimensional additive models
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Minimax optimal estimation in partially linear additive models under high dimension
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- Extreme eigenvalues of nonlinear correlation matrices with applications to additive models
- PAC-Bayesian estimation and prediction in sparse additive models
- Doubly penalized estimation in additive regression with high-dimensional data
- Learning theory of multiple kernel learning (original title: Multiple Kernel Learningの学習理論)
- Title not available
- Asymptotically faster estimation of high-dimensional additive models using subspace learning
- High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso
- A semiparametric model for matrix regression
- Title not available
- Regularized learning schemes in feature Banach spaces
- Distributed learning for sketched kernel regression
- Improved Estimation of High-dimensional Additive Models Using Subspace Learning
- Approximate nonparametric quantile regression in reproducing kernel Hilbert spaces via random projection
- Sparse high-dimensional semi-nonparametric quantile regression in a reproducing kernel Hilbert space
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Sparse additive support vector machines in bounded variation space
- Decentralized learning over a network with Nyström approximation using SGD
- Sparse multiple kernel learning: minimax rates with random projection
- Additive model selection