Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
From MaRDI portal
Publication:4975847
Cited in
(only showing first 100 items)
- A bootstrap Lasso+partial ridge method to construct confidence intervals for parameters in high-dimensional sparse linear models
- Optimal variable selection in multi-group sparse discriminant analysis
- Sparse quadratic classification rules via linear dimension reduction
- Adjusting for high-dimensional covariates in sparse precision matrix estimation by \(\ell_1\)-penalization
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Estimating time-varying networks
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm
- Model selection for high-dimensional quadratic regression via regularization
- On estimation error bounds of the Elastic Net when p ≫ n
- A framework for solving mixed-integer semidefinite programs
- Numerical characterization of support recovery in sparse regression with correlated design
- The adaptive Lasso in high-dimensional sparse heteroscedastic models
- Sharp recovery bounds for convex demixing, with applications
- A Mixed-Integer Fractional Optimization Approach to Best Subset Selection
- Recovery of partly sparse and dense signals
- Optimal false discovery control of minimax estimators
- Detection boundary in sparse regression
- On the conditions used to prove oracle results for the Lasso
- LASSO risk and phase transition under dependence
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Learning high-dimensional Gaussian linear structural equation models with heterogeneous error variances
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Penalized logspline density estimation using total variation penalty
- An unbiased approach to compressed sensing
- Sparse regression: scalable algorithms and empirical performance
- Simple expressions of the Lasso and SLOPE estimators in low-dimension
- A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics
- Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes
- Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements
- Review of Bayesian selection methods for categorical predictors using JAGS
- High-dimensional dynamic systems identification with additional constraints
- Sorted concave penalized regression
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Bayesian augmented Lagrangian algorithm for system identification
- Least squares after model selection in high-dimensional sparse models
- Sign-constrained least squares estimation for high-dimensional regression
- A global homogeneity test for high-dimensional linear regression
- Revisiting feature selection for linear models with FDR and power guarantees
- Rejoinder: ``Sparse regression: scalable algorithms and empirical performance''
- Multiple hypothesis testing for variable selection
- Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling
- Multi-stage convex relaxation for feature selection
- Chaotic analog-to-information conversion: principle and reconstructability with parameter identifiability
- A general family of trimmed estimators for robust high-dimensional data analysis
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Adaptive log-density estimation
- Regularized estimation of high-dimensional factor-augmented vector autoregressive (FAVAR) models
- Sparse directed acyclic graphs incorporating the covariates
- RIPless compressed sensing from anisotropic measurements
- On semidefinite relaxations for the block model
- Discussion: Latent variable graphical model selection via convex optimization
- Bootstrap inference for network construction with an application to a breast cancer microarray study
- High-dimensional covariance estimation by minimizing \(\ell_1\)-penalized log-determinant divergence
- A significance test for the lasso
- Sparse semiparametric discriminant analysis
- A generalized knockoff procedure for FDR control in structural change detection
- Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- Model selection consistency of Lasso for empirical data
- The generalized Lasso problem and uniqueness
- On constrained and regularized high-dimensional regression
- Rejoinder: Latent variable graphical model selection via convex optimization
- Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates
- Sparse principal component based high-dimensional mediation analysis
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- The Lasso problem and uniqueness
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Coordinate ascent for penalized semiparametric regression on high-dimensional panel count data
- High-dimensional change-point estimation: combining filtering with convex optimization
- Generalized Kalman smoothing: modeling and algorithms
- Exploiting prior knowledge in compressed sensing to design robust systems for endoscopy image recovery
- Techniques for accelerating branch-and-bound algorithms dedicated to sparse optimization
- Oracle inequalities for high-dimensional prediction
- Sparse classification: a scalable discrete optimization perspective
- Discussion: Latent variable graphical model selection via convex optimization
- Minimax-optimal nonparametric regression in high dimensions
- Boosting with structural sparsity: a differential inclusion approach
- High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Discussion: ``A significance test for the lasso''
- Low complexity regularization of linear inverse problems
- Nonparametric Functional Graphical Modeling Through Functional Additive Regression Operator
- Estimation and variable selection with exponential weights
- Discussion: Latent variable graphical model selection via convex optimization
- Asymptotic theory of \(\ell_1\)-regularized PDE identification from a single noisy trajectory
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Discussion: Latent variable graphical model selection via convex optimization
- A tight bound of hard thresholding
- Variable selection for nonparametric learning with power series kernels
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- A tuning-free robust and efficient approach to high-dimensional regression
- A two-stage sequential conditional selection approach to sparse high-dimensional multivariate regression models
- Tensor recovery in high-dimensional Ising models
- On high-dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization
- A look at robustness and stability of \(\ell_1\)- versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
- Improved variable selection with forward-lasso adaptive shrinkage