Consistency of the group Lasso and multiple kernel learning
Publication: 3096148
zbMATH Open: 1225.68147 · MaRDI QID: Q3096148 · FDO: Q3096148
Authors: Francis Bach
Publication date: 8 November 2011
Full work available at URL: http://www.jmlr.org/papers/v9/bach08b.html
Cited in (only showing first 100 items)
- Regularizing multiple kernel learning using response surface methodology
- Sparsity with sign-coherent groups of variables via the cooperative-Lasso
- The benefit of group sparsity
- Grouping strategies and thresholding for high dimensional linear models
- On the oracle property of adaptive group Lasso in high-dimensional linear models
- Learning rates for the risk of kernel-based quantile regression estimators in additive models
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- SpicyMKL: a fast algorithm for multiple kernel learning with thousands of kernels
- Group selection in high-dimensional partially linear additive models
- Robust classification using \(\ell _{2,1}\)-norm based regression model
- Penalized estimation in additive varying coefficient models using grouped regularization
- Structured variable selection via prior-induced hierarchical penalty functions
- Covariate-adjusted tensor classification in high dimensions
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- The degrees of freedom of partly smooth regularizers
- Low complexity regularization of linear inverse problems
- Theoretical properties of the overlapping groups Lasso
- Learning causal networks via additive faithfulness
- Comprehensive comparative analysis and identification of RNA-binding protein domains: multi-class classification and feature selection
- Learning sparse gradients for variable selection and dimension reduction
- Title not available
- On a nonlinear extension of the principal fitted component model
- On extension theorems and their connection to universal consistency in machine learning
- Transductive versions of the Lasso and the Dantzig selector
- On the asymptotic properties of the group lasso estimator for linear models
- An unexpected connection between Bayes \(A\)-optimal designs and the group Lasso
- Proximal methods for the latent group lasso penalty
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- Learning theory of multiple kernel learning
- Trace regression model with simultaneously low rank and row (column) sparse parameter
- Structured variable selection with sparsity-inducing norms
- Sparse high-dimensional varying coefficient model: nonasymptotic minimax study
- OR forum: An algorithmic approach to linear regression
- A Bayesian approach to sparse dynamic network identification
- Lasso in infinite dimension: application to variable selection in functional multivariate linear regression
- On proximal gradient method for the convex problems regularized with the group reproducing kernel norm
- Self-concordant analysis for logistic regression
- The benefit of group sparsity in group inference with de-biased scaled group Lasso
- On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
- Copula Gaussian Graphical Models for Functional Data
- Sparse hierarchical regression with polynomials
- Sampling from non-smooth distributions through Langevin diffusion
- Nonparametric and high-dimensional functional graphical models
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- The two-sample problem for Poisson processes: adaptive tests with a nonasymptotic wild bootstrap approach
- Error variance estimation in ultrahigh-dimensional additive models
- Accuracy guaranties for \(\ell_{1}\) recovery of block-sparse signals
- Robust inference on average treatment effects with possibly more covariates than observations
- Learning the coordinate gradients
- Efficient block-coordinate descent algorithms for the group Lasso
- Structured sparsity through convex optimization
- Support union recovery in high-dimensional multivariate regression
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Sparsity in multiple kernel learning
- A selective review of group selection in high-dimensional models
- Oracle inequalities and optimal inference under group sparsity
- Large-scale multivariate sparse regression with applications to UK Biobank
- Bridge regression: adaptivity and group selection
- Random feature-based online multi-kernel learning in environments with unknown dynamics
- Dynamic networks with multi-scale temporal structure
- Consistent group selection with Bayesian high dimensional modeling
- Variable selection in nonparametric additive models
- High-dimensional regression with unknown variance
- Title not available
- Random forest-based approach for physiological functional variable selection for driver's stress level classification
- Multikernel regression with sparsity constraint
- High-dimensional grouped folded concave penalized estimation via the LLA algorithm
- Improving localized multiple kernel learning via radius-margin bound
- Improving the prediction performance of the Lasso by subtracting the additive structural noises
- Joint sparse optimization: lower-order regularization method and application in cell fate conversion
- Active-set based block coordinate descent algorithm in group LASSO for self-exciting threshold autoregressive model
- Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates
- Logistic regression: from art to science
- Network classification with applications to brain connectomics
- Estimating sparse networks with hubs
- Sharp oracle inequalities for low-complexity priors
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- HARFE: hard-ridge random feature expansion
- Analytic center cutting plane method for multiple kernel learning
- Equivalent Lipschitz surrogates for zero-norm and rank optimization problems
- Improvement of multiple kernel learning using adaptively weighted regularization
- Model selection with low complexity priors
- Locally Sparse Function-on-Function Regression
- Local linear convergence of proximal coordinate descent algorithm
- Bayesian mixed effect atlas estimation with a diffeomorphic deformation model
- Efficient functional Lasso kernel smoothing for high-dimensional additive regression
- Sparse RKHS estimation via globally convex optimization and its application in LPV-IO identification
- Simultaneous off-the-grid learning of mixtures issued from a continuous dictionary
- On group-wise \(\ell_p\) regularization: theory and efficient algorithms
- Exact recovery of the support of piecewise constant images via total variation regularization
- Proximal gradient method with automatic selection of the parameter by automatic differentiation
- Physics informed topology learning in networks of linear dynamical systems
- Modeling interactive components by coordinate kernel polynomial models
- A penalized two-pass regression to predict stock returns with time-varying risk premia
- Variable selection in additive models via hierarchical sparse penalty
- A Nonparametric Graphical Model for Functional Data With Application to Brain Networks Based on fMRI
- Fast projections onto mixed-norm balls with applications