SPADES and mixture models
Abstract: This paper studies sparse density estimation via penalization (SPADES). We focus on estimation in high-dimensional mixture models and on nonparametric adaptive density estimation. We show, respectively, that SPADES can recover, with high probability, the unknown components of a mixture of probability densities and that it yields minimax adaptive density estimates. These results are based on a general sparsity oracle inequality that the SPADES estimates satisfy. We offer a data-driven method for the choice of the tuning parameter used in the construction of SPADES. The method uses the generalized bisection method first introduced in \cite{bb09}. The suggested procedure bypasses the need for a grid search and offers substantial computational savings. We complement our theoretical results with a simulation study that employs this method for approximations of one- and two-dimensional densities with mixtures. The numerical results strongly support our theoretical findings.
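As background for the abstract: following the companion paper "Sparse Density Estimation with ℓ1 Penalties" (listed below), the SPADES estimate fits \(f_\lambda = \sum_{j=1}^M \lambda_j f_j\) over a dictionary of densities \(f_1,\dots,f_M\) by minimizing a penalized empirical \(L_2\) risk of the form \(\|f_\lambda\|_2^2 - \frac{2}{n}\sum_{i=1}^n f_\lambda(X_i) + 2\sum_{j=1}^M \omega_j |\lambda_j|\). The sketch below is a minimal illustration of this criterion, not the authors' implementation: it assumes a Gaussian location dictionary (so the Gram matrix has a closed form), replaces the paper's data-driven, bisection-tuned weights \(\omega_j\) with a fixed constant, and solves the resulting Lasso-type problem by plain coordinate descent (cf. "Pathwise coordinate optimization" in the reference list). All function names are hypothetical.

```python
import numpy as np

def gaussian_gram(mus, sigma):
    # Closed-form Gram matrix G[j, k] = integral of f_j * f_k for the
    # dictionary f_j = N(mu_j, sigma^2), via the Gaussian convolution identity.
    d = mus[:, None] - mus[None, :]
    return np.exp(-d**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)

def dict_eval(x, mus, sigma):
    # n x M matrix of dictionary densities evaluated at the sample points.
    z = (x[:, None] - mus[None, :]) / sigma
    return np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))

def spades_sketch(x, mus, sigma, omega, n_iter=1000, tol=1e-10):
    # Cyclic coordinate descent for
    #   min_lam  lam' G lam - 2 b' lam + 2 * sum_j omega_j * |lam_j|,
    # where b_j = (1/n) sum_i f_j(x_i).  A simplified stand-in for SPADES:
    # the paper tunes omega by a generalized bisection method instead of
    # fixing it in advance.
    G = gaussian_gram(mus, sigma)
    b = dict_eval(x, mus, sigma).mean(axis=0)
    lam = np.zeros(len(mus))
    for _ in range(n_iter):
        lam_old = lam.copy()
        for j in range(len(mus)):
            # Partial residual excluding coordinate j, then soft-threshold.
            r = b[j] - G[j] @ lam + G[j, j] * lam[j]
            lam[j] = np.sign(r) * max(abs(r) - omega[j], 0.0) / G[j, j]
        if np.max(np.abs(lam - lam_old)) < tol:
            break
    return lam

# Toy usage: a two-component Gaussian mixture, overcomplete dictionary.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])
mus = np.linspace(-6.0, 6.0, 25)        # candidate component centres
omega = np.full(len(mus), 0.05)         # fixed penalty level (illustrative)
lam = spades_sketch(x, mus, 1.0, omega)
print("selected centres:", mus[lam != 0], "weights:", lam[lam != 0])
```

With a well-separated mixture and a moderate penalty level, the recovered support typically concentrates on dictionary centres near the true component means, which is the sparse-recovery behaviour the abstract describes.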
Recommendations
- Sparse Density Estimation with ℓ1 Penalties
- Finite mixture regression: a sparse variable selection by model selection for clustering
- scientific article; zbMATH DE number 5290366
- Adaptive density estimation for clustering with Gaussian mixtures
- Estimation and confidence sets for sparse normal mixtures
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 3789676
- scientific article; zbMATH DE number 42384
- scientific article; zbMATH DE number 1304261
- scientific article; zbMATH DE number 1064642
- scientific article; zbMATH DE number 2038320
- scientific article; zbMATH DE number 1522808
- scientific article; zbMATH DE number 845714
- Adapting to unknown sparsity by controlling the false discovery rate
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Aggregation for Gaussian regression
- Atomic decomposition by basis pursuit
- Combinatorial methods in density estimation
- Consistent covariate selection and post model selection inference in semiparametric regression
- Consistent estimation of mixture complexity
- De-noising by soft-thresholding
- Density estimation by the penalized combinatorial method
- Dimension reduction and variable selection in case control studies via regularized likelihood optimization
- High-dimensional generalized linear models and the lasso
- High-dimensional graphs and variable selection with the Lasso
- Honest variable selection in linear and logistic regression models via \(\ell_1\) and \(\ell_1+\ell_2\) penalization
- Learning Theory and Kernel Machines
- Linear and convex aggregation of density estimators
- Model selection in nonparametric regression
- Nonparametric estimation of smooth probability densities in \(L_2\)
- Numerical recipes. The art of scientific computing.
- Pathwise coordinate optimization
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Quasi-universal bandwidth selection for kernel density estimators
- Reconstruction of sparse vectors in white Gaussian noise
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse Density Estimation with ℓ1 Penalties
- Sparsity in penalized empirical risk minimization
- Sparsity oracle inequalities for the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- Strongly consistent model selection for densities
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- The Adaptive Lasso and Its Oracle Properties
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Wavelets, approximation, and statistical applications
- \(L^p\) adaptive density estimation
Cited in (21)
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Sparse Density Estimation with ℓ1 Penalties
- Minimax bounds for Besov classes in density estimation
- Penalized logspline density estimation using total variation penalty
- Proximal distance algorithms: theory and practice
- Solution of linear ill-posed problems using overcomplete dictionaries
- Adaptive Dantzig density estimation
- Supermix: sparse regularization for mixtures
- Estimating the amount of sparsity in two-point mixture models
- A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models
- Sparsity considerations for dependent variables
- Parameter recovery in two-component contamination mixtures: the \(L^2\) strategy
- Sparse mixture models inspired by ANOVA decompositions
- Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence
- Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood
- Compressive Gaussian mixture estimation
- Adaptive log-density estimation
- Penalized B-spline estimator for regression functions using total variation penalty
- Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
- Convex Optimization, Shape Constraints, Compound Decisions, and Empirical Bayes Rules
- High-dimensional additive hazards models and the lasso