Consistency of the group Lasso and multiple kernel learning
Publication:3096148
Cited in
(only showing first 100 items)
- Structured sparsity through convex optimization
- Sparsity in multiple kernel learning
- Multiple Domain and Multiple Kernel Outcome-Weighted Learning for Estimating Individualized Treatment Regimes
- Moving force identification based on group Lasso and compressed sensing
- Oracle inequalities and optimal inference under group sparsity
- A selective review of group selection in high-dimensional models
- Quantile regression with group Lasso for classification
- Large-scale multivariate sparse regression with applications to UK Biobank
- Bridge regression: adaptivity and group selection
- Lasso-based variable selection methods in text regression: the case of short texts
- Variable selection in nonparametric additive models
- Dynamic networks with multi-scale temporal structure
- Consistent group selection with Bayesian high dimensional modeling
- Random feature-based online multi-kernel learning in environments with unknown dynamics
- Sparse quadratic classification rules via linear dimension reduction
- scientific article; zbMATH DE number 6860781
- Regularizing multiple kernel learning using response surface methodology
- Sparsity with sign-coherent groups of variables via the cooperative-Lasso
- High-dimensional regression with unknown variance
- The benefit of group sparsity
- Random forest-based approach for physiological functional variable selection for driver's stress level classification
- Multikernel regression with sparsity constraint
- Grouping strategies and thresholding for high dimensional linear models
- High-dimensional grouped folded concave penalized estimation via the LLA algorithm
- On the oracle property of adaptive group Lasso in high-dimensional linear models
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Improving localized multiple kernel learning via radius-margin bound
- Learning rates for the risk of kernel-based quantile regression estimators in additive models
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- SpicyMKL: a fast algorithm for multiple kernel learning with thousands of kernels
- Group selection in high-dimensional partially linear additive models
- Improving the prediction performance of the Lasso by subtracting the additive structural noises
- Robust classification using \(\ell _{2,1}\)-norm based regression model
- Penalized estimation in additive varying coefficient models using grouped regularization
- Structured variable selection via prior-induced hierarchical penalty functions
- Covariate-adjusted tensor classification in high dimensions
- Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Joint sparse optimization: lower-order regularization method and application in cell fate conversion
- Active-set based block coordinate descent algorithm in group LASSO for self-exciting threshold autoregressive model
- The degrees of freedom of partly smooth regularizers
- Logistic regression: from art to science
- Low complexity regularization of linear inverse problems
- Theoretical properties of the overlapping groups Lasso
- Network classification with applications to brain connectomics
- Estimating sparse networks with hubs
- Sharp oracle inequalities for low-complexity priors
- Comprehensive comparative analysis and identification of RNA-binding protein domains: multi-class classification and feature selection
- Learning causal networks via additive faithfulness
- Analytic center cutting plane method for multiple kernel learning
- Equivalent Lipschitz surrogates for zero-norm and rank optimization problems
- Learning sparse gradients for variable selection and dimension reduction
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- HARFE: hard-ridge random feature expansion
- scientific article; zbMATH DE number 7370569
- On extension theorems and their connection to universal consistency in machine learning
- Transductive versions of the Lasso and the Dantzig selector
- Improvement of multiple kernel learning using adaptively weighted regularization
- On a nonlinear extension of the principal fitted component model
- On the asymptotic properties of the group lasso estimator for linear models
- An unexpected connection between Bayes \(A\)-optimal designs and the group Lasso
- Model selection with low complexity priors
- Proximal methods for the latent group lasso penalty
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- Locally Sparse Function-on-Function Regression
- Local linear convergence of proximal coordinate descent algorithm
- Bayesian mixed effect atlas estimation with a diffeomorphic deformation model
- Trace regression model with simultaneously low rank and row(column) sparse parameter
- Learning theory of multiple kernel learning
- Efficient functional Lasso kernel smoothing for high-dimensional additive regression
- Sparse RKHS estimation via globally convex optimization and its application in LPV-IO identification
- Structured variable selection with sparsity-inducing norms
- On group-wise \(\ell_p\) regularization: theory and efficient algorithms
- Simultaneous off-the-grid learning of mixtures issued from a continuous dictionary
- Sparse high-dimensional varying coefficient model: nonasymptotic minimax study
- OR forum: An algorithmic approach to linear regression
- A Bayesian approach to sparse dynamic network identification
- Proximal gradient method with automatic selection of the parameter by automatic differentiation
- Exact recovery of the support of piecewise constant images via total variation regularization
- Self-concordant analysis for logistic regression
- On proximal gradient method for the convex problems regularized with the group reproducing kernel norm
- Physics informed topology learning in networks of linear dynamical systems
- Modeling interactive components by coordinate kernel polynomial models
- The benefit of group sparsity in group inference with de-biased scaled group Lasso
- Lasso in Infinite dimension: application to variable selection in functional multivariate linear regression
- A penalized two-pass regression to predict stock returns with time-varying risk premia
- Variable selection in additive models via hierarchical sparse penalty
- A Nonparametric Graphical Model for Functional Data With Application to Brain Networks Based on fMRI
- On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
- Fast projections onto mixed-norm balls with applications
- Sparse hierarchical regression with polynomials
- Automatic component selection in additive modeling of French national electricity load forecasting
- A group VISA algorithm for variable selection
- Copula Gaussian Graphical Models for Functional Data
- Pareto-efficient designs for multi- and mixed-level supersaturated designs
- Sampling from non-smooth distributions through Langevin diffusion
- The variable selection methods and algorithms in the multiple linear model
- Nonparametric and high-dimensional functional graphical models
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness