Learning sparse gradients for variable selection and dimension reduction
Recommendations
- Model-free variable selection in reproducing kernel Hilbert space
- A nonlinear multi-dimensional variable selection method for high dimensional data: sparse MAVE
- Coordinate-independent sparse sufficient dimension reduction and variable selection
- Robust variable selection through MAVE
- Learning the coordinate gradients
Cites work
- scientific article; zbMATH DE number 439380 (no title available)
- scientific article; zbMATH DE number 4213315 (no title available)
- scientific article; zbMATH DE number 4170917 (no title available)
- scientific article; zbMATH DE number 52737 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- DOI: 10.1162/153244303322753616
- DOI: 10.1162/153244303322753661
- DOI: 10.1162/153244303322753751
- A framelet-based image inpainting algorithm
- An Adaptive Estimation of Dimension Reduction Space
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Component selection and smoothing in multivariate nonparametric regression
- Compressed sensing
- Consistency of the group Lasso and multiple kernel learning
- Contour regression: a general approach to dimension reduction
- De-noising by soft-thresholding
- Estimation of gradients and coordinate covariation in classification
- Exploring Regression Structure Using Nonparametric Functional Estimation
- Feature space perspectives for learning the kernel
- Gene selection for cancer classification using support vector machines
- Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data
- Kernel dimension reduction in regression
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Learning Theory
- Learning and approximation by Gaussians on Riemannian manifolds
- Learning coordinate covariances via gradients
- Learning gradients on manifolds
- Least angle regression. (With discussion)
- Manifold regularization: a geometric framework for learning from labeled and unlabeled examples
- Minimal realizations of nonlinear systems
- Multivariate locally weighted least squares regression
- On Learning Vector-Valued Functions
- On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein's Lemma
- Regularization and Variable Selection Via the Elastic Net
- Rodeo: Sparse, greedy nonparametric regression
- Selection of variables and dimension reduction in high-dimensional non-parametric regression
- Signal Recovery by Proximal Forward-Backward Splitting
- Sliced Inverse Regression for Dimension Reduction
- Some properties of invariant sets of a flow
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Structure adaptive approach for dimension reduction.
- Theory & Methods: Special Invited Paper: Dimension Reduction and Visualization in Discriminant Analysis (with discussion)
- Theory of Reproducing Kernels
- Weak convergence and empirical processes. With applications to statistics
Cited in (16)
- Discovering model structure for partially linear models
- Sparse dimension reduction for survival data
- Learning gradients on manifolds
- Learning the coordinate gradients
- Structure learning via unstructured kernel-based M-estimation
- Sparse learning of the disease severity score for high-dimensional data
- High-dimensional local linear regression under sparsity and convex losses
- Variable selection for partially linear models via learning gradients
- Selective factor extraction in high dimensions
- Learning gradients: predictive models that infer geometry and statistical dependence
- Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications
- Learning Reductions to Sparse Sets
- Refined generalization bounds of gradient learning over reproducing kernel Hilbert spaces
- Model-free variable selection in reproducing kernel Hilbert space
- Performance analysis of the LapRSSLG algorithm in learning theory
- Learning gradients from nonidentical data