Square-root lasso: pivotal recovery of sparse signals via conic programming
Publication:3107973
Abstract: We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors \(p\) is large, possibly much larger than the sample size \(n\), but only \(s\) regressors are significant. The method is a modification of the lasso, called the square-root lasso. The method is pivotal in that it neither relies on the knowledge of the standard deviation \(\sigma\) nor does it need to pre-estimate \(\sigma\). Moreover, the method does not rely on normality or sub-Gaussianity of noise. It achieves near-oracle performance, attaining the convergence rate \(\sigma\sqrt{(s/n)\log p}\) in the prediction norm, and thus matching the performance of the lasso with known \(\sigma\). These performance results are valid for both Gaussian and non-Gaussian errors, under some mild moment restrictions. We formulate the square-root lasso as a solution to a convex conic programming problem, which allows us to implement the estimator using efficient algorithmic methods, such as interior-point and first-order methods.
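For reference, the square-root lasso replaces the lasso's squared-error fit with its square root; the estimator solves

\[
\hat\beta \in \arg\min_{\beta \in \mathbb{R}^p}\; \sqrt{\frac{1}{n}\sum_{i=1}^n (y_i - x_i'\beta)^2} \;+\; \frac{\lambda}{n}\,\|\beta\|_1,
\]

with a pivotal penalty level of the form \(\lambda = c\,\sqrt{n}\,\Phi^{-1}(1-\alpha/(2p))\) that involves no \(\sigma\). Below is a minimal sketch of solving this problem in practice, assuming CVXPY as the modeling layer; the simulated data and the constants \(c = 1.1\), \(\alpha = 0.05\) are illustrative choices, not the paper's own code:

    import numpy as np
    import cvxpy as cp
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n, p, s = 100, 200, 5                       # n samples, p regressors, s active
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 1.0
    y = X @ beta_true + rng.standard_normal(n)  # sigma = 1, but unknown to the estimator

    # Pivotal penalty: lambda = c * sqrt(n) * Phi^{-1}(1 - alpha/(2p)); no sigma appears.
    lam = 1.1 * np.sqrt(n) * norm.ppf(1 - 0.05 / (2 * p))

    beta = cp.Variable(p)
    # Objective: root-mean-squared residual plus (lam/n) times the l1 penalty.
    objective = cp.norm(y - X @ beta, 2) / np.sqrt(n) + (lam / n) * cp.norm(beta, 1)
    cp.Problem(cp.Minimize(objective)).solve()  # reduced internally to an SOCP

    print("selected coefficients:", int((np.abs(beta.value) > 1e-4).sum()))

Because the fit term is a Euclidean norm, the problem is a second-order cone program, which is what makes interior-point and first-order methods directly applicable.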
Recommendations
- Square-root Lasso for high-dimensional sparse linear systems with weakly dependent errors
- Pivotal estimation via square-root lasso in nonparametric regression
- Improved bounds for square-root Lasso and square-root slope
- Pivotal Estimation in High-Dimensional Regression via Linear Programming
- Sharp oracle inequalities for square root regularization
Cited in
- Sure independence screening for analyzing supersaturated designs
- Adaptive estimation of high-dimensional signal-to-noise ratios
- A general theory of concave regularization for high-dimensional sparse estimation problems
- High-dimensional regression with unknown variance
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Ridge regression and asymptotic minimax estimation over spheres of growing dimension
- Sparse identification of posynomial models
- An off-the-grid approach to multi-compartment magnetic resonance fingerprinting
- A study on tuning parameter selection for the high-dimensional lasso
- A fast trans-lasso algorithm with penalized weighted score function
- Multi-block alternating direction method of multipliers for ultrahigh dimensional quantile fused regression
- A two-stage regularization method for variable selection and forecasting in high-order interaction model
- The partial linear model in high dimensions
- A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
- Simplex QP-based methods for minimizing a conic quadratic objective over polyhedra
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Robust subspace clustering
- Recovery of sums of sparse and dense signals by incorporating graphical structure among predictors
- A fast and effective algorithm for sparse linear regression with \(\ell_p\)-norm data fidelity and elastic net regularization
- Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study
- Accuracy assessment for high-dimensional linear regression
- Sharp oracle inequalities for square root regularization
- Sign-constrained least squares estimation for high-dimensional regression
- Noise covariance estimation in multi-task high-dimensional linear models
- Sparse HP filter: finding kinks in the COVID-19 contact rate
- Sparse Convoluted Rank Regression in High Dimensions
- Adapting to unknown noise level in sparse deconvolution
- scientific article; zbMATH DE number 7306926
- Selective inference with a randomized response
- Online Debiasing for Adaptively Collected High-Dimensional Data With Applications to Time Series Analysis
- A general family of trimmed estimators for robust high-dimensional data analysis
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- Optimal designs in sparse linear models
- Group sparse structural smoothing recovery: model, statistical properties and algorithm
- Inference for high-dimensional linear models with locally stationary error processes
- Robust oracle estimation and uncertainty quantification for possibly sparse quantiles
- Optimal learning
- Tuning-free heterogeneous inference in massive networks
- A knockoff filter for high-dimensional selective inference
- Smooth over-parameterized solvers for non-smooth structured optimization
- Improved bounds for square-root Lasso and square-root slope
- Penalized and constrained LAD estimation in fixed and high dimension
- Square root LASSO: well-posedness, Lipschitz stability, and the tuning trade-off
- Robust and tuning-free sparse linear regression via square-root slope
- Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework
- Greedy variance estimation for the LASSO
- Linear hypothesis testing in dense high-dimensional linear models
- Regularization for high-dimensional covariance matrix
- Oracle inequalities for high-dimensional prediction
- Optimal sparsity testing in linear regression model
- A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
- Stable local-smooth principal component pursuit
- Penalised robust estimators for sparse and high-dimensional linear models
- Noisy low-rank matrix completion with general sampling distribution
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
- A tuning-free robust and efficient approach to high-dimensional regression
- An inexact augmented Lagrangian method for second-order cone programming with applications
- High-dimensional tests for functional networks of brain anatomic regions
- Post-model-selection inference in linear regression models: an integrated review
- Zero-norm regularized problems: equivalent surrogates, proximal MM method and statistical error bound
- Stein's method for nonlinear statistics: a brief survey and recent progress
- On estimation of the diagonal elements of a sparse precision matrix
- Joint estimation and variable selection for mean and dispersion in proper dispersion models
- Recovering structured signals in noise: least-squares meets compressed sensing
- L0-Regularized Learning for High-Dimensional Additive Hazards Regression
- Thresholding tests based on affine Lasso to achieve non-asymptotic nominal level and high power under sparse and dense alternatives in high dimension
- Estimation of covariance and precision matrix, network structure, and a view toward systems biology
- Prediction error bounds for linear regression with the TREX
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- The EAS approach for graphical selection consistency in vector autoregression models
- \(\ell_1\)-penalised ordinal polytomous regression estimators with application to gene expression studies
- Econometric estimation with high-dimensional moment equalities
- Perspective functions: proximal calculus and applications in high-dimensional statistics
- Generalization of constraints for high dimensional regression problems
- An efficient two step algorithm for high dimensional change point regression models without grid search
- Self-normalization: taming a wild population in a heavy-tailed world
- Comment on “A Tuning-Free Robust and Efficient Approach to High-Dimensional Regression”
- Best subset selection with shrinkage: sparse additive hazards regression with the grouping effect
- Simultaneous feature selection and clustering based on square root optimization
- A permutation approach for selecting the penalty parameter in penalized model selection
- Variable selection for sparse logistic regression
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Oracle inequalities for convex loss functions with nonlinear targets
- scientific article; zbMATH DE number 7306909
- Sequential Scaled Sparse Factor Regression
- Sharp MSE bounds for proximal denoising
- Nonsparse learning with latent variables
- Inference on the change point under a high dimensional sparse mean shift
- Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- Correcting for unknown errors in sparse high-dimensional function approximation
- A projection-based conditional dependence measure with applications to high-dimensional undirected graphical models
- Lasso meets horseshoe: a survey
- Sparse additive models in high dimensions with wavelets
- On the regularized risk of distributionally robust learning over deep neural networks
- High-dimensional inference robust to outliers with \(\ell_1\)-norm penalization