Square-root lasso: pivotal recovery of sparse signals via conic programming
From MaRDI portal
Publication:3107973
Abstract: We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors \(p\) is large, possibly much larger than the sample size \(n\), but only \(s\) regressors are significant. The method is a modification of the lasso, called the square-root lasso. The method is pivotal in that it neither relies on knowledge of the standard deviation \(\sigma\) nor needs to pre-estimate \(\sigma\). Moreover, the method does not rely on normality or sub-Gaussianity of the noise. It achieves near-oracle performance, attaining the convergence rate \(\sigma\sqrt{(s/n)\log p}\) in the prediction norm, and thus matching the performance of the lasso with known \(\sigma\). These performance results are valid for both Gaussian and non-Gaussian errors, under some mild moment restrictions. We formulate the square-root lasso as a solution to a convex conic programming problem, which allows us to implement the estimator using efficient algorithmic methods, such as interior-point and first-order methods.
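As a rough illustration of the estimator described in the abstract, the square-root lasso minimizes \(\|y - X\beta\|_2/\sqrt{n} + \lambda\|\beta\|_1\); the key point is that the penalty level \(\lambda\) can be chosen without knowing or estimating \(\sigma\). The sketch below solves this objective with a simple proximal-gradient (ISTA) loop in plain numpy rather than the conic-programming formulation of the paper; the step-size rule and the choice of \(\lambda\) are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

def sqrt_lasso_ista(X, y, lam, n_iter=500):
    """Proximal-gradient sketch for the square-root lasso:
        min_b  ||y - X b||_2 / sqrt(n) + lam * ||b||_1
    Assumes the residual stays nonzero, so the first term is differentiable.
    """
    n, p = X.shape
    op_norm_sq = np.linalg.norm(X, 2) ** 2  # spectral norm squared of X
    b = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ b
        r_norm = np.linalg.norm(r)
        # gradient of ||y - Xb||_2 / sqrt(n)
        grad = -X.T @ r / (np.sqrt(n) * r_norm)
        # local Lipschitz estimate: Hessian norm <= ||X||^2 / (sqrt(n) ||r||)
        step = np.sqrt(n) * r_norm / op_norm_sq
        z = b - step * grad
        # soft-thresholding = prox of the l1 penalty
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return b

# Small synthetic example (assumed setup, not from the paper)
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
b_true = np.zeros(p)
b_true[:3] = [3.0, -2.0, 1.5]
y = X @ b_true + 0.1 * rng.standard_normal(n)
lam = 1.1 * np.sqrt(np.log(p) / n)  # pivotal: no sigma appears here
b_hat = sqrt_lasso_ista(X, y, lam)
```

Note how `lam` depends only on \(p\) and \(n\): this is the "pivotal" property, in contrast to the ordinary lasso whose theoretically justified penalty scales with the unknown \(\sigma\).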
Recommendations
- Square-root Lasso for high-dimensional sparse linear systems with weakly dependent errors
- Pivotal estimation via square-root lasso in nonparametric regression
- Improved bounds for square-root Lasso and square-root slope
- Pivotal Estimation in High-Dimensional Regression via Linear Programming
- Sharp oracle inequalities for square root regularization
Cited in (first 100 items shown)
- Prediction bounds for higher order total variation regularized least squares
- Confidence intervals for high-dimensional inverse covariance estimation
- Non-Convex Global Minimization and False Discovery Rate Control for the TREX
- The sparsity of LASSO-type minimizers
- Group inference in high dimensions with applications to hierarchical testing
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
- Gaussian graphical model estimation with false discovery rate control
- Honest confidence regions and optimality in high-dimensional precision matrix estimation
- A dual semismooth Newton based augmented Lagrangian method for large-scale linearly constrained sparse group square-root Lasso problems
- On asymptotically optimal confidence regions and tests for high-dimensional models
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- Overview of debiased Lasso in high-dimensional linear model
- The benefit of group sparsity in group inference with de-biased scaled group Lasso
- Goodness-of-Fit Tests for High Dimensional Linear Models
- Optimal Estimation of Genetic Relatedness in High-Dimensional Linear Models
- Testing endogeneity with high dimensional covariates
- WARPd: a linearly convergent first-order primal-dual algorithm for inverse problems with approximate sharpness conditions
- Global-local mixtures: a unifying framework
- High-dimensional inference in misspecified linear models
- scientific article; zbMATH DE number 7626791
- Square-root Lasso for high-dimensional sparse linear systems with weakly dependent errors
- The Noise Collector for sparse recovery in high dimensions
- Inverse problems are solvable on real number signal processing hardware
- Oracle inequalities for high dimensional vector autoregressions
- Double-estimation-friendly inference for high-dimensional misspecified models
- Group penalized quantile regression
- Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
- Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements
- A lava attack on the recovery of sums of dense and sparse signals
- Scale calibration for high-dimensional robust regression
- Inference robust to outliers with \(\ell_1\)-norm penalization
- Worst possible sub-directions in high-dimensional models
- Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors
- Pivotal estimation via square-root lasso in nonparametric regression
- Debiasing the debiased Lasso with bootstrap
- Do log factors matter? On optimal wavelet approximation and the foundations of compressed sensing
- A proximal dual semismooth Newton method for zero-norm penalized quantile regression estimator
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
- Significance testing in non-sparse high-dimensional linear models
- Robust Wasserstein profile inference and applications to machine learning
- Solution paths of variational regularization methods for inverse problems
- Sparse Poisson regression with penalized weighted score function
- Linear regression with sparsely permuted data
- scientific article; zbMATH DE number 7306914
- High-dimensional regression with potential prior information on variable importance
- An efficient semismooth Newton method for adaptive sparse signal recovery problems
- Hedonic pricing modelling with unstructured predictors: an application to Italian fashion industry
- Adaptive smoothing algorithms for nonsmooth composite convex minimization
- Sure independence screening for analyzing supersaturated designs
- Adaptive estimation of high-dimensional signal-to-noise ratios
- A general theory of concave regularization for high-dimensional sparse estimation problems
- High-dimensional regression with unknown variance
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Ridge regression and asymptotic minimax estimation over spheres of growing dimension
- Sparse identification of posynomial models
- An off-the-grid approach to multi-compartment magnetic resonance fingerprinting
- A study on tuning parameter selection for the high-dimensional lasso
- A fast trans-lasso algorithm with penalized weighted score function
- Multi-block alternating direction method of multipliers for ultrahigh dimensional quantile fused regression
- A two-stage regularization method for variable selection and forecasting in high-order interaction model
- The partial linear model in high dimensions
- A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
- Simplex QP-based methods for minimizing a conic quadratic objective over polyhedra
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Robust subspace clustering
- Recovery of sums of sparse and dense signals by incorporating graphical structure among predictors
- A fast and effective algorithm for sparse linear regression with \(\ell_p\)-norm data fidelity and elastic net regularization
- Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study
- Accuracy assessment for high-dimensional linear regression
- Sharp oracle inequalities for square root regularization
- Sign-constrained least squares estimation for high-dimensional regression
- Noise covariance estimation in multi-task high-dimensional linear models
- Sparse HP filter: finding kinks in the COVID-19 contact rate
- Sparse Convoluted Rank Regression in High Dimensions
- Adapting to unknown noise level in sparse deconvolution
- scientific article; zbMATH DE number 7306926
- Selective inference with a randomized response
- Online Debiasing for Adaptively Collected High-Dimensional Data With Applications to Time Series Analysis
- A general family of trimmed estimators for robust high-dimensional data analysis
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- Optimal designs in sparse linear models
- Group sparse structural smoothing recovery: model, statistical properties and algorithm
- Inference for high‐dimensional linear models with locally stationary error processes
- Robust oracle estimation and uncertainty quantification for possibly sparse quantiles
- Optimal learning
- Tuning-free heterogeneous inference in massive networks
- A knockoff filter for high-dimensional selective inference
- Smooth over-parameterized solvers for non-smooth structured optimization
- Improved bounds for square-root Lasso and square-root slope
- Penalized and constrained LAD estimation in fixed and high dimension
- Square root LASSO: well-posedness, Lipschitz stability, and the tuning trade-off
- Robust and tuning-free sparse linear regression via square-root slope
- Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework
- Greedy variance estimation for the LASSO
- Linear hypothesis testing in dense high-dimensional linear models