Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
Publication: 366980
DOI: 10.1214/13-AOS1095
zbMath: 1273.62090
arXiv: 1203.0565
MaRDI QID: Q366980
Taiji Suzuki, Masashi Sugiyama
Publication date: 25 September 2013
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1203.0565
reproducing kernel Hilbert spaces, convergence rate, smoothness, additive model, sparse learning, elastic-net, restricted isometry
Asymptotic properties of parametric estimators (62F12)
Nonparametric regression and quantile regression (62G08)
Ridge regression; shrinkage estimators (Lasso) (62J07)
Convex programming (90C25)
Applications of functional analysis in probability theory and statistics (46N30)
Related Items
- Statistical inference in sparse high-dimensional additive models
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Improved Estimation of High-dimensional Additive Models Using Subspace Learning
- Extreme eigenvalues of nonlinear correlation matrices with applications to additive models
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Learning theory of Multiple Kernel Learning
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- Distributed learning for sketched kernel regression
- Regularized learning schemes in feature Banach spaces
- Decentralized learning over a network with Nyström approximation using SGD
- PAC-Bayesian estimation and prediction in sparse additive models
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Minimax optimal estimation in partially linear additive models under high dimension
- Additive model selection
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- A semiparametric model for matrix regression
- Approximate nonparametric quantile regression in reproducing kernel Hilbert spaces via random projection
- High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso
- Sparse high-dimensional semi-nonparametric quantile regression in a reproducing kernel Hilbert space
- Doubly penalized estimation in additive regression with high-dimensional data
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Sparsity in multiple kernel learning
- Eigenvalues of integral operators defined by smooth positive definite kernels
- High-dimensional additive modeling
- Weak convergence and empirical processes. With applications to statistics
- Optimal rates for the regularized least-squares algorithm
- Simultaneous analysis of Lasso and Dantzig selector
- Some results on Tchebycheffian spline functions and stochastic processes
- Support Vector Machines
- The Group Lasso for Logistic Regression
- Function Classes That Approximate the Bayes Risk
- Learning Bounds for Support Vector Machines with Learned Kernels
- Regularization and Variable Selection Via the Elastic Net
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- Choosing multiple parameters for support vector machines