Learning sparse conditional distribution: an efficient kernel-based approach
Publication: 2044348
DOI: 10.1214/21-EJS1824
zbMath: 1471.62323
MaRDI QID: Q2044348
Junhui Wang, Fang Chen, Xin He
Publication date: 9 August 2021
Published in: Electronic Journal of Statistics
MSC classifications:
- Applications of statistics to economics (62P20)
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Computational learning theory (68Q32)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Related Items (1)
Cites Work
- Measuring and testing dependence by correlation of distances
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- On constrained and regularized high-dimensional regression
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Component selection and smoothing in multivariate nonparametric regression
- Derivative reproducing properties for kernel methods in learning theory
- Variable selection in nonparametric additive models
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Nonlinear time series. Nonparametric and parametric methods
- Quantile-adaptive model-free variable screening for high-dimensional heterogeneous data
- A knockoff filter for high-dimensional selective inference
- Converting high-dimensional regression to high-dimensional conditional density estimation
- Distributed inference for quantile regression processes
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Approximating conditional distribution functions using dimension reduction
- Learning theory estimates via integral operators and their approximations
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Nonparametric sparsity and regularization
- Forward Regression for Ultra-High Dimensional Variable Screening
- Nonparametric Independence Screening in Sparse Ultra-High-Dimensional Additive Models
- Consistency of learning algorithms using Attouch–Wets convergence
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- Regression Quantiles
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Methods for Estimating a Conditional Distribution Function
- Kernel Distribution Embeddings: Universal Kernels, Characteristic Kernels and Kernel Metrics on Distributions
- Gradient-induced Model-free Variable Selection with Composite Quantile Regression
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- DOI: 10.1162/153244303321897690
- Likelihood-Based Selection and Sharp Parameter Estimation
- Gradient-Based Kernel Dimension Reduction for Regression
- High Dimensional Ordinary Least Squares Projection for Screening Variables
- Universality, Characteristic Kernels and RKHS Embedding of Measures
- Nonparametric Estimation of the Conditional Distribution at Regression Boundary Points