scientific article; zbMATH DE number 6982301
From MaRDI portal
Publication: 4558147
zbMATH Open: 1444.62091 · arXiv: 1701.05128 · MaRDI QID: Q4558147 · FDO: Q4558147
Authors: Jian Huang, Xiliang Lu, Yu Ling Jiao, Yanyan Liu
Publication date: 21 November 2018
Full work available at URL: https://arxiv.org/abs/1701.05128
Title: A constructive approach to \(L_0\) penalized regression
Recommendations
- On solving \(L_{q}\)-penalized regressions
- Penalized regression combining the \(L_1\) norm and a correlation based penalty
- Penalized regression, standard errors, and Bayesian Lassos
- Penalized least-squares estimation for regression coefficients in high-dimensional partially linear models
- Efficient penalized estimation for linear regression model
- Penalized likelihood regression: General formulation and efficient approximation
- Variable selection and estimation using a continuous approximation to the \(L_0\) penalty
- A general framework for prediction in penalized regression
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
Keywords: oracle property; geometrical convergence; root finding; support detection; Karush-Kuhn-Tucker (KKT) conditions; nonasymptotic error bounds
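For orientation, a minimal statement of the \(\ell_0\)-penalized least-squares problem to which these keywords refer (the standard formulation is assumed here; the notation is not taken from the record itself):
\[
\hat\beta \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_0,
\qquad
\lVert \beta \rVert_0 = \#\{\, j : \beta_j \neq 0 \,\},
\]
where \(X \in \mathbb{R}^{n \times p}\) is the design matrix, \(y \in \mathbb{R}^n\) the response, and \(\lambda > 0\) a tuning parameter. The KKT conditions of this problem are what support-detection and root-finding procedures of the kind indexed by these keywords are built on.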
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- SparseNet: coordinate descent with nonconvex penalties
- The Adaptive Lasso and Its Oracle Properties
- Least angle regression. (With discussion)
- Pathwise coordinate optimization
- Coordinate descent algorithms for lasso penalized regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Title not available
- High-dimensional graphs and variable selection with the Lasso
- Best subset selection via a modern optimization lens
- Analysis of multi-stage convex relaxation for sparse regularization
- Title not available
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Variable selection using MM algorithms
- Adaptive Lasso for sparse high-dimensional regression models
- Gradient methods for minimizing composite functions
- Matching pursuits with time-frequency dictionaries
- Title not available
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Nonconcave penalized likelihood with a diverging number of parameters.
- Decoding by Linear Programming
- Stable recovery of sparse overcomplete representations in the presence of noise
- A new approach to variable selection in least squares problems
- Stable signal recovery from incomplete and inaccurate measurements
- Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
- Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Smoothly clipped absolute deviation on high dimensions
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Iterative hard thresholding for compressed sensing
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Greed is Good: Algorithmic Results for Sparse Approximation
- Sparse Recovery With Orthogonal Matching Pursuit Under RIP
- The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing
- A primal dual active set with continuation algorithm for the \(\ell^0\)-regularized optimization problem
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming
- Sparse Approximate Solutions to Linear Systems
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- Iterative thresholding for sparse approximations
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Calibrating nonconvex penalized regression in ultra-high dimension
- Strong oracle optimality of folded concave penalized estimation
- Atomic decomposition by basis pursuit
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Fast Solution of $\ell _{1}$-Norm Minimization Problems When the Solution May Be Sparse
- A proximal-gradient homotopy method for the sparse least-squares problem
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Title not available
Cited In (33)
- Newton method for \(\ell_0\)-regularized optimization
- Subset selection in network-linked data
- An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems
- Fitting sparse linear models under the sufficient and necessary condition for model identification
- Fast best subset selection: coordinate descent and local combinatorial optimization algorithms
- Can’t Ridge Regression Perform Variable Selection?
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm
- A scalable surrogate \(L_0\) sparse regression method for generalized linear models with applications to large scale data
- Sparse HP filter: finding kinks in the COVID-19 contact rate
- Sparse regularization with the ℓ0 norm
- Zero-norm regularized problems: equivalent surrogates, proximal MM method and statistical error bound
- Efficient regularized regression with \(L_0\) penalty for variable selection and network construction
- Subspace Newton method for sparse group \(\ell_0\) optimization problem
- L0-Regularized Learning for High-Dimensional Additive Hazards Regression
- \(\ell_0\)-regularized high-dimensional accelerated failure time model
- Best subset selection with shrinkage: sparse additive hazards regression with the grouping effect
- An extended Newton-type algorithm for \(\ell_2\)-regularized sparse logistic regression and its efficiency for classifying large-scale datasets
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Sparse quantile regression
- A communication-efficient method for ℓ0 regularization linear regression models
- A neutral comparison of algorithms to minimize \(L_0\) penalties for high-dimensional variable selection
- A data-driven line search rule for support recovery in high-dimensional data analysis
- \(L_0\)-regularization for high-dimensional regression with corrupted data
- Solving \(\ell_0\)-penalized problems with simple constraints via the Frank-Wolfe reduced dimension method
- The springback penalty for robust signal recovery
- A network-constrain Weibull AFT model for biomarkers discovery
- Smoothing Newton method for \(\ell^0\)-\(\ell^2\) regularized linear inverse problem
- Communication-efficient estimation for distributed subset selection
- Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm
- Truncated \(L_1\) regularized linear regression: theory and algorithm
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- High-performance statistical computing in the computing environments of the 2020s