Sign-constrained least squares estimation for high-dimensional regression
DOI: 10.1214/13-EJS818
zbMath: 1327.62422
arXiv: 1202.0889
OpenAlex: W2962882549
MaRDI QID: Q1954143
Publication date: 20 June 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1202.0889
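The paper concerns least squares estimation under sign constraints on the coefficients, with no added penalty term. As a hedged sketch only (the data-generating setup and support threshold below are illustrative assumptions, not taken from the paper), the non-negative special case, min over beta >= 0 of ||y - X beta||_2^2, can be solved with SciPy's NNLS routine:

```python
# Minimal illustration (assumed setup, not from the paper): the
# non-negative case of sign-constrained least squares, solved with
# SciPy's Lawson-Hanson NNLS routine.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, p = 50, 200                        # high-dimensional: p > n
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0                   # sparse, non-negative signal
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Solve min_{beta >= 0} ||y - X beta||_2^2; no penalty, only the
# sign constraint.
beta_hat, _ = nnls(X, y)
print("estimated support:", np.flatnonzero(beta_hat > 1e-8))
```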
Related Items
- Penalized and constrained LAD estimation in fixed and high dimension
- Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling
- Group subset selection for linear regression
- Integer constraints for enhancing interpretability in linear regression
- Constrained inference in linear regression
- A component lasso
- Iteratively reweighted adaptive Lasso for conditional heteroscedastic time series with applications to AR-ARCH type processes
- Adaptive ridge estimator in a linear regression model with spherically symmetric error under constraint
- Nonnegative estimation and variable selection via adaptive elastic-net for high-dimensional data
- The geometry of least squares in the 21st century
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Sparse topic modeling: computational efficiency, near-optimal algorithms, and statistical inference
- Estimation of a high-dimensional counting process without penalty for high-frequency events
- Efficient sparse portfolios based on composite quantile regression for high-dimensional index tracking
- The geometry of hypothesis testing over convex cones: generalized likelihood ratio tests and minimax radii
- Bayesian inference for generalized linear model with linear inequality constraints
- On asymptotically optimal confidence regions and tests for high-dimensional models
- High-dimensional sign-constrained feature selection and grouping
- Nonnegative estimation and variable selection under minimax concave penalty for sparse high-dimensional linear regression models
- Estimation of positive definite $M$-matrices and structure learning for attractive Gaussian Markov random fields
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Relaxed Lasso
- Random projections for the nonnegative least-squares problem
- Least angle regression. (With discussion)
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when $p$ is much larger than $n$. (With discussions and rejoinder)
- Network tomography: recent developments
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Better Subset Regression Using the Nonnegative Garrote
- Estimating Network Loss Rates Using Active Tomography
- Network Delay Tomography using Flexicast Experiments
- On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations
- An interior point Newton-like method for non-negative least-squares problems with degenerate solution
- Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programming (Lasso)
- The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms
- Learning the parts of objects by non-negative matrix factorization
- Model Selection and Estimation in Regression with Grouped Variables
- Ridge Regression: Biased Estimation for Nonorthogonal Problems