Penalized and constrained LAD estimation in fixed and high dimension
From MaRDI portal
Publication:2122803
DOI: 10.1007/s00362-021-01229-0
OpenAlex: W3151973893
MaRDI QID: Q2122803
Publication date: 7 April 2022
Published in: Statistical Papers
Full work available at URL: https://doi.org/10.1007/s00362-021-01229-0
Uses Software
Cites Work
- OSQP: An Operator Splitting Solver for Quadratic Programs
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Nearly unbiased variable selection under minimax concave penalty
- Variable selection in regression with compositional covariates
- The Adaptive Lasso and Its Oracle Properties
- Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling
- Regression analysis for microbiome compositional data
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
- Statistics for high-dimensional data. Methods, theory and applications.
- The solution path of the generalized lasso
- Relaxed Lasso
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. With comments by Ronald A. Thisted and M. R. Osborne and a rejoinder by the authors
- Nonnegative-Lasso and application in index tracking
- Solving norm constrained portfolio optimization via coordinate-wise descent algorithms
- \(l_1\) regularized multiplicative iterative path algorithm for non-negative generalized linear models
- The dual and degrees of freedom of linearly constrained generalized Lasso
- Asymptotics of least-squares estimators for constrained nonlinear regression
- Least angle regression. (With discussion)
- On the asymptotics of constrained \(M\)-estimation
- Asymptotic normality of \(L_1\)-estimators in nonlinear regression
- Variable selection via RIVAL (removing irrelevant variables amidst lasso iterations) and its application to nuclear material detection
- Robust regression through the Huber's criterion and adaptive lasso penalty
- Sign-constrained least squares estimation for high-dimensional regression
- Generalized \(\ell_1\)-penalized quantile regression with linear constraints
- Nonnegative estimation and variable selection under minimax concave penalty for sparse high-dimensional linear regression models
- Nonnegative hierarchical Lasso with a mixed \((1, \frac{1}{2})\)-penalty and a fast solver
- Asymptotic inference for the constrained quantile regression process
- Simultaneous analysis of Lasso and Dantzig selector
- Nonnegative elastic net and application in index tracking
- The linearized alternating direction method of multipliers for sparse group LAD model
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Coordinate descent algorithms for lasso penalized regression
- Cones, matrices and mathematical programming
- Isotonic regression: Another look at the changepoint problem
- Quantile Regression for Large-Scale Applications
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Constrained Statistical Inference
- Penalized and Constrained Optimization: An Application to High-Dimensional Website Advertising
- Algorithms for Fitting the Constrained Lasso
- Spatial smoothing and hot spot detection for CGH data using the fused lasso
- Inequality Constrained Least-Squares Estimation
- Asymptotic Theory of Least Absolute Error Regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparsity and Smoothness Via the Fused Lasso
- Sparse Composite Quantile Regression in Ultrahigh Dimensions With Tuning Parameter Calibration
- New Bounds for Restricted Isometry Constants
- Regularization and Variable Selection Via the Elastic Net
- Optimization
- Robust Statistics
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- ADMM for Penalized Quantile Regression in Big Data