Sparsity in multiple kernel learning
Keywords: reproducing kernel Hilbert spaces; sparsity; high dimensionality; oracle inequality; multiple kernel learning; restricted isometry
MSC classifications: Asymptotic properties of parametric estimators (62F12); Nonparametric regression and quantile regression (62G08); Ridge regression; shrinkage estimators (Lasso) (62J07); Applications of functional analysis in probability theory and statistics (46N30); Inequalities; stochastic orderings (60E15); Nonparametric inference (62G99)
Abstract: The problem of multiple kernel learning based on penalized empirical risk minimization is discussed. The complexity penalty is determined jointly by the empirical norms and the reproducing kernel Hilbert space (RKHS) norms induced by the kernels, with a data-driven choice of regularization parameters. The main focus is on the case when the total number of kernels is large, but only a relatively small number of them are needed to represent the target function, so that the problem is sparse. The goal is to establish oracle inequalities for the excess risk of the resulting prediction rule, showing that the method is adaptive both to the unknown design distribution and to the sparsity of the problem.
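The penalized empirical risk minimization scheme described in the abstract can be illustrated with a simplified sketch. The paper's penalty combines empirical norms and RKHS norms with data-driven regularization parameters; the sketch below collapses this into a single group-lasso penalty on per-kernel RKHS norms, which is the standard simplification. The kernel family (Gaussian kernels at several bandwidths), the regularization level `lam`, and all numerical choices are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(X, gamma):
    # pairwise squared distances -> Gaussian (RBF) Gram matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# toy regression data: smooth target plus noise
n = 80
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# candidate kernels (several bandwidths; only a few should matter)
gammas = [0.5, 2.0, 8.0, 32.0]
Phis = []
for g in gammas:
    K = gaussian_kernel(X, g)
    # symmetric square-root factor of K, so that for f_m = Phi_m w_m
    # the RKHS norm of f_m equals the Euclidean norm ||w_m||_2
    vals, vecs = np.linalg.eigh(K)
    Phis.append(vecs * np.sqrt(np.clip(vals, 0, None)))
Phi = np.hstack(Phis)                               # n x (n * M) design
groups = [slice(m * n, (m + 1) * n) for m in range(len(gammas))]

# proximal gradient (ISTA) on the group-lasso objective
#   (1/n) ||y - Phi w||^2 + lam * sum_m ||w_m||_2
lam = 0.05
L = 2 * np.linalg.norm(Phi, 2) ** 2 / n             # gradient Lipschitz constant
w = np.zeros(Phi.shape[1])
for _ in range(500):
    grad = 2 * Phi.T @ (Phi @ w - y) / n
    z = w - grad / L
    for s in groups:                                # group soft-thresholding:
        nrm = np.linalg.norm(z[s])                  # kills whole kernel blocks,
        z[s] = 0.0 if nrm <= lam / L else (1 - lam / (L * nrm)) * z[s]
    w = z                                           # which is where sparsity enters

# per-kernel RKHS norms of the fitted components
norms = [np.linalg.norm(w[s]) for s in groups]
```

The group structure is the point of contact with the paper: each block `w_m` corresponds to one kernel's component, and the block-wise penalty drives entire kernel components to zero, mirroring the sparse kernel selection that the oracle inequalities quantify.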
Recommendations
Cites work
- scientific article; zbMATH DE number 2089352
- scientific article; zbMATH DE number 49190
- A Bennett concentration inequality and its application to suprema of empirical processes
- Component selection and smoothing in multivariate nonparametric regression
- Consistency of the group Lasso and multiple kernel learning
- High-dimensional additive modeling
- Introduction to nonparametric estimation
- Learning Bounds for Support Vector Machines with Learned Kernels
- Learning the kernel function via regularization
- Learning the kernel matrix with semidefinite programming
- New concentration inequalities in product spaces
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse recovery in convex hulls via entropy penalization
- Sparsity in penalized empirical risk minimization
- Statistical performance of support vector machines
- The Dantzig selector and sparsity oracle inequalities
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Theory of Reproducing Kernels
- Weak convergence and empirical processes. With applications to statistics
Cited in (70)
- Kernel Ordinary Differential Equations
- Randomized sketches for kernel CCA
- Locally adaptive sparse additive quantile regression model with TV penalty
- Multikernel regression with sparsity constraint
- Grouping strategies and thresholding for high dimensional linear models
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Rates of contraction with respect to \(L_2\)-distance for Bayesian nonparametric regression
- Randomized multi-scale kernels learning with sparsity constraint regularization for regression
- Estimates on learning rates for multi-penalty distribution regression
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Block-wise primal-dual algorithms for large-scale doubly penalized ANOVA modeling
- Guaranteed Functional Tensor Singular Value Decomposition
- Minimax optimal estimation in partially linear additive models under high dimension
- On the convergence rate of \(l_{p}\)-norm multiple kernel learning
- Learning non-parametric basis independent models from point queries via low-rank methods
- Multiple Kernel Learning for Sparse Representation-Based Classification
- Metamodel construction for sensitivity analysis
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Learning rates for partially linear support vector machine in high dimensions
- Learning rates for classification with Gaussian kernels
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Minimax-optimal nonparametric regression in high dimensions
- Semiparametric regression models with additive nonparametric components and high dimensional parametric components
- Sparse Approximation of a Kernel Mean
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- scientific article; zbMATH DE number 7329270
- Statistical inference in compound functional models
- Extreme eigenvalues of nonlinear correlation matrices with applications to additive models
- PAC-Bayesian estimation and prediction in sparse additive models
- A unified penalized method for sparse additive quantile models: an RKHS approach
- Tight conditions for consistency of variable selection in the context of high dimensionality
- Sparse kernel SVMs via cutting-plane training
- Doubly penalized estimation in additive regression with high-dimensional data
- Variable selection in additive quantile regression using nonconcave penalty
- Learning sparse conditional distribution: an efficient kernel-based approach
- Learning Theory of Multiple Kernel Learning [Multiple Kernel Learningの学習理論]
- Regularizers for structured sparsity
- Sparse RKHS estimation via globally convex optimization and its application in LPV-IO identification
- Minimax and adaptive prediction for functional linear regression
- DDAC-SpAM: A Distributed Algorithm for Fitting High-dimensional Sparse Additive Models with Feature Division and Decorrelation
- Greedy Kernel Approximation for Sparse Surrogate Modeling
- Multiple spectral kernel learning and a Gaussian complexity computation
- Asymptotically faster estimation of high-dimensional additive models using subspace learning
- scientific article; zbMATH DE number 5957340
- Significant vector learning to construct sparse kernel regression models
- Learning general sparse additive models from point queries in high dimensions
- A semiparametric model for matrix regression
- Automatic component selection in additive modeling of French national electricity load forecasting
- Kernel meets sieve: post-regularization confidence bands for sparse additive model
- Kernel Knockoffs Selection for Nonparametric Additive Models
- Regularizing double machine learning in partially linear endogenous models
- Variable sparsity kernel learning
- Inference for high-dimensional varying-coefficient quantile regression
- Improved Estimation of High-dimensional Additive Models Using Subspace Learning
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- The two-sample problem for Poisson processes: adaptive tests with a nonasymptotic wild bootstrap approach
- Approximate nonparametric quantile regression in reproducing kernel Hilbert spaces via random projection
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Information based complexity for high dimensional sparse functions
- Nonlinear Variable Selection via Deep Neural Networks
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Nonparametric variable screening for multivariate additive models
- Sparse additive support vector machines in bounded variation space
- Hierarchical Total Variations and Doubly Penalized ANOVA Modeling for Multivariate Nonparametric Regression
- Decentralized learning over a network with Nyström approximation using SGD
- Backfitting algorithms for total-variation and empirical-norm penalized additive modelling with high-dimensional data
- Sparse multiple kernel learning: minimax rates with random projection
- Additive model selection
- Sparsity in penalized empirical risk minimization
MaRDI item Q620564