scientific article
From MaRDI portal
Publication: 3174050
zbMath Open: 1222.62008 · MaRDI QID: Q3174050
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v7/zhao06a.html
Title: On Model Selection Consistency of Lasso
Related Items
Combining a relaxed EM algorithm with Occam's razor for Bayesian variable selection in high-dimensional regression, Worst possible sub-directions in high-dimensional models, Influence measures and stability for graphical models, Bayesian structure learning in sparse Gaussian graphical models, High dimensional discrimination analysis via a semiparametric model, Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling, Best subset selection via a modern optimization lens, An analysis of penalized interaction models, Statistical consistency of coefficient-based conditional quantile regression, Asymptotic properties of lasso in high-dimensional partially linear models, Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model, Joint estimation of precision matrices in heterogeneous populations, Finding causative genes from high-dimensional data: an appraisal of statistical and machine learning approaches, Designing penalty functions in high dimensional problems: the role of tuning parameters, Oracle inequalities, variable selection and uniform inference in high-dimensional correlated random effects panel data models, Testing a single regression coefficient in high dimensional linear models, A rank-corrected procedure for matrix completion with fixed basis coefficients, AIC for the Lasso in generalized linear models, Variable selection for additive partial linear quantile regression with missing covariates, Asymtotics of Dantzig selector for a general single-index model, Adaptive bridge estimation for high-dimensional regression models, Sub-optimality of some continuous shrinkage priors, Split Bregman algorithms for sparse group lasso with application to MRI reconstruction, Strong consistency of Lasso estimators, The adaptive Lasso in high-dimensional sparse heteroscedastic models, Oracle inequalities for the lasso in the Cox model, Rates of convergence of the 
adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap, Statistical significance in high-dimensional linear models, Multivariate Bernoulli distribution, Stability, Penalized profiled semiparametric estimating functions, Sparse recovery under matrix uncertainty, On constrained and regularized high-dimensional regression, Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression, Adjusting for high-dimensional covariates in sparse precision matrix estimation by \(\ell_1\)-penalization, Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization, Variable selection and regression analysis for graph-structured covariates with an application to genomics, Correlated variables in regression: clustering and sparse estimation, Block coordinate descent algorithms for large-scale sparse multiclass classification, \(\ell_{1}\)-penalization for mixture regression models, Consistent group selection in high-dimensional linear regression, Shrinkage estimation for identification of linear components in additive models, Profiled adaptive elastic-net procedure for partially linear models with high-dimensional covariates, Adaptive Dantzig density estimation, Group selection in high-dimensional partially linear additive models, Autoregressive process modeling via the Lasso procedure, Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization, Coordinate ascent for penalized semiparametric regression on high-dimensional panel count data, Non-convex penalized estimation in high-dimensional models with single-index structure, Oracle properties of SCAD-penalized support vector machine, A majorization-minimization approach to variable selection using spike and slab priors, Sparse regression learning by aggregation and Langevin Monte-Carlo, Mirror averaging with sparsity priors, The log-linear group-lasso estimator and its
asymptotic properties, Transductive versions of the Lasso and the Dantzig selector, Estimation in high-dimensional linear models with deterministic design matrices, Boosting algorithms: regularization, prediction and model fitting, Regularization for Cox's proportional hazards model with NP-dimensionality, Learning high-dimensional directed acyclic graphs with latent and selection variables, Generalization of constraints for high dimensional regression problems, Quadratic approximation on SCAD penalized estimation, Model selection via adaptive shrinkage with \(t\) priors, Parametric or nonparametric? A parametricness index for model selection, Sparse regression and support recovery with \(\mathbb{L}_2\)-boosting algorithms, Bayesian high-dimensional screening via MCMC, Variable selection in infinite-dimensional problems, Factor models and variable selection in high-dimensional regression analysis, The variational Garrote, Covariate assisted screening and estimation, A new perspective on least squares under convex constraint, CAM: causal additive models, high-dimensional order search and penalized regression, High-dimensional Bayesian inference in nonparametric additive models, Generalized M-estimators for high-dimensional Tobit I models, On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property, Oracle inequalities for high dimensional vector autoregressions, Going beyond oracle property: selection consistency and uniqueness of local solution of the generalized linear model, Weighted \(\ell_1\)-penalized corrected quantile regression for high dimensional measurement error models, On nonparametric feature filters in electromagnetic imaging, Interpreting latent variables in factor models via convex optimization, Penalized least squares estimation with weakly dependent data, Tuning parameter selection for the adaptive LASSO in the autoregressive model, Shrinkage tuning parameter selection in precision matrices 
estimation, Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error, Improved variable selection with forward-lasso adaptive shrinkage, Estimation of high-dimensional low-rank matrices, Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces, On asymptotically optimal confidence regions and tests for high-dimensional models, Estimator selection in the Gaussian setting, Nearly unbiased variable selection under minimax concave penalty, Selection by partitioning the solution paths, Kernel Knockoffs Selection for Nonparametric Additive Models, A unified approach to model selection and sparse recovery using regularized least squares, Confidence intervals for high-dimensional inverse covariance estimation, Latent variable graphical model selection via convex optimization, An Automated Approach Towards Sparse Single-Equation Cointegration Modelling, Factor-Adjusted Regularized Model Selection, Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates, On the oracle property of adaptive group Lasso in high-dimensional linear models, Sparse estimation via nonconcave penalized likelihood in factor analysis model, Penalized likelihood regression for generalized linear models with non-quadratic penalties, Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso, A Note on High-Dimensional Linear Regression With Interactions, Variable selection, monotone likelihood ratio and group sparsity, Variable selection for high‐dimensional generalized linear model with block‐missing data, Byzantine-robust distributed sparse learning for \(M\)-estimation, Gene-environment interaction analysis under the Cox model, Assessing mediating effects of high‐dimensional microbiome measurements in dietary intervention studies, Group sparse recovery via group square-root elastic 
net and the iterative multivariate thresholding-based algorithm, A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates, High-dimensional \(M\)-estimation for Byzantine-robust decentralized learning, Distributed Sparse Composite Quantile Regression in Ultrahigh Dimensions, A communication-efficient method for ℓ0 regularization linear regression models, Global debiased DC estimations for biased estimators via pro forma regression, Nonparametric Functional Graphical Modeling Through Functional Additive Regression Operator, High-Dimensional Gaussian Graphical Regression Models with Covariates, Ultra-High Dimensional Variable Selection for Doubly Robust Causal Inference, Sparse vector error correction models with application to cointegration‐based trading, Penalized wavelet nonparametric univariate logistic regression for irregular spaced data, Penalized Regression for Multiple Types of Many Features With Missing Data, Weak Signal Identification and Inference in Penalized Likelihood Models for Categorical Responses, Sparse and Low-Rank Matrix Quantile Estimation With Application to Quadratic Regression, Path algorithms for fused lasso signal approximator with application to COVID‐19 spread in Korea, svReg: Structural varying‐coefficient regression to differentiate how regional brain atrophy affects motor impairment for Huntington disease severity groups, A penalized structural equation modeling method accounting for secondary phenotypes for variable selection on genetically regulated expression from PrediXcan for Alzheimer's disease, Bootstrapping some GLM and survival regression variable selection estimators, Controlling False Discovery Rate Using Gaussian Mirrors, Kernel Ordinary Differential Equations, Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data, An ensemble EM algorithm for Bayesian variable selection, The EAS approach to variable selection for multivariate 
response data in high-dimensional settings, A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics, Robust inference for high‐dimensional single index models, Online inference in high-dimensional generalized linear models with streaming data, Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design, High-Dimensional Censored Regression via the Penalized Tobit Likelihood, A generalized knockoff procedure for FDR control in structural change detection, Latent Network Structure Learning From High-Dimensional Multivariate Point Processes, Subspace Estimation with Automatic Dimension and Variable Selection in Sufficient Dimension Reduction, Statistical Learning for Individualized Asset Allocation, Multi-Task Learning with High-Dimensional Noisy Images, Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis, Multi-task sparse identification for closed-loop systems with general observation sequences, On linear models for discrete operator inference in time dependent problems, Distributed sparse identification for stochastic dynamic systems under cooperative non-persistent excitation condition, Variable selection and regularization via arbitrary rectangle-range generalized elastic net, A global two-stage algorithm for non-convex penalized high-dimensional linear regression problems, Locally Sparse Function-on-Function Regression, Regularized Linear Programming Discriminant Rule with Folded Concave Penalty for Ultrahigh-Dimensional Data, Local linear convergence of proximal coordinate descent algorithm, Measures of Uncertainty for Shrinkage Model Selection, Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates, Asset splitting algorithm for ultrahigh dimensional portfolio selection and its theoretical property, DIF statistical inference without knowing anchoring items, Sparse and simple structure estimation via prenet 
penalization, A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression, Estimation and inference in sparse multivariate regression and conditional Gaussian graphical models under an unbalanced distributed setting, High-dimensional functional graphical model structure learning via neighborhood selection approach, A joint estimation for the high-dimensional regression modeling on stratified data, Sparse Identification and Estimation of Large-Scale Vector AutoRegressive Moving Averages, Discussion: "A significance test for the lasso", Discussion: "A significance test for the lasso", Discussion: "A significance test for the lasso", Discussion: "A significance test for the lasso", Discussion: "A significance test for the lasso", Nonnegative-Lasso and application in index tracking, Adaptive and reversed penalty for analysis of high-dimensional correlated data, Fitting sparse linear models under the sufficient and necessary condition for model identification, LOL selection in high dimension, Nonsmoothness in machine learning: specific structure, proximal identification, and applications, Estimation of an oblique structure via penalized likelihood factor analysis, Model selection consistency of Lasso for empirical data, A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model, Penalized logspline density estimation using total variation penalty, On the post selection inference constant under restricted isometry properties, Some sharp performance bounds for least squares regression with \(L_1\) regularization, Near-ideal model selection by \(\ell _{1}\) minimization, High-dimensional variable selection, Properties and refinements of the fused Lasso, Sparsity in penalized empirical risk minimization, On the distribution of penalized maximum likelihood estimators: the LASSO, SCAD, and thresholding, Statistics for big data: a perspective, Nonparametric independence
screening via favored smoothing bandwidth, Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments, Variable selection and parameter estimation with the Atan regularization method, Homogeneity detection for the high-dimensional generalized linear model, Inference for biased transformation models, Iteratively reweighted adaptive Lasso for conditional heteroscedastic time series with applications to AR-ARCH type processes, Structured variable selection via prior-induced hierarchical penalty functions, On stepwise pattern recovery of the fused Lasso, On the sign consistency of the Lasso for the high-dimensional Cox model, Moderately clipped Lasso, Finding Dantzig selectors with a proximity operator based fixed-point algorithm, The dual and degrees of freedom of linearly constrained generalized Lasso, Forecasting macroeconomic variables in data-rich environments, Generalized Kalman smoothing: modeling and algorithms, An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors, A doubly sparse approach for group variable selection, Exact support recovery for sparse spikes deconvolution, Optimal variable selection in multi-group sparse discriminant analysis, Bayesian linear regression with sparse priors, Controlling the false discovery rate via knockoffs, Robust estimation for an inverse problem arising in multiview geometry, Quantile regression for additive coefficient models in high dimensions, Model selection via standard error adjusted adaptive Lasso, Efficient nonconvex sparse group feature selection via continuous and discrete optimization, Bridge estimators and the adaptive Lasso under heteroscedasticity, Model selection and estimation in high dimensional regression models with group SCAD, Learning theory approach to a system identification problem involving atomic norm, Variable selection via RIVAL (removing irrelevant variables amidst lasso iterations) and its 
application to nuclear material detection, \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors, Discussion: Latent variable graphical model selection via convex optimization, Rejoinder: Latent variable graphical model selection via convex optimization, Online streaming feature selection using rough sets, A penalized likelihood method for structural equation modeling, Finite mixture regression: a sparse variable selection by model selection for clustering, Sparse support recovery using correlation information in the presence of additive noise, Minimax risks for sparse regressions: ultra-high dimensional phenomenons, Estimating networks with jumps, The Lasso problem and uniqueness, Sparse least trimmed squares regression for analyzing high-dimensional large data sets, Bootstrap inference for network construction with an application to a breast cancer microarray study, On the asymptotic properties of the group lasso estimator for linear models, Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization, Selection of variables and dimension reduction in high-dimensional non-parametric regression, Thresholding-based iterative selection procedures for model selection and shrinkage, On the conditions used to prove oracle results for the Lasso, On the total variation regularized estimator over a class of tree graphs, Simultaneous variable selection and smoothing for high-dimensional function-on-scalar regression, Efficient LED-SAC sparse estimator using fast sequential adaptive coordinate-wise optimization (LED-2SAC), A note on the Lasso for Gaussian graphical model selection, A systematic review on model selection in high-dimensional regression, Discussion: One-step sparse estimates in nonconcave penalized likelihood models, Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models, The sparsity and bias of the LASSO selection 
in high-dimensional linear regression, "Preconditioning" for feature selection and regression in high-dimensional problems, Determination of vector error correction models in high dimensions, Pathwise coordinate optimization for sparse learning: algorithm and theory, Variable selection in multivariate linear models with high-dimensional covariance matrix estimation, Regularization and the small-ball method. I: Sparse recovery, Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space, Uniformly valid confidence sets based on the Lasso, Shrinkage and model selection with correlated variables via weighted fusion, The predictive Lasso, Nonconcave penalized composite conditional likelihood estimation of sparse Ising models, Image denoising via solution paths, High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression, Least angle and \(\ell _{1}\) penalized regression: a review, High-dimensional Gaussian model selection on a Gaussian design, Variable selection in nonparametric additive models, SPADES and mixture models, Feature selection guided by structural information, Sparse factor regression via penalized maximum likelihood estimation, Broken adaptive ridge regression and its asymptotic properties, Lasso-type recovery of sparse representations for high-dimensional data, Some theoretical results on the grouped variables Lasso, Variable selection with Hamming loss, Sparse recovery via differential inclusions, Tight conditions for consistency of variable selection in the context of high dimensionality, Bayesian adaptive Lasso, SCAD-penalized regression in high-dimensional partially linear models, Elastic-net regularization in learning theory, Sparsistency and rates of convergence in large covariance matrix estimation, Estimating high-dimensional intervention effects from observational data, High-dimensional model recovery from random sketched data by exploring intrinsic sparsity, Dynamic networks
with multi-scale temporal structure, Sparse Laplacian shrinkage with the graphical Lasso estimator for regression problems, Approximation to stochastic variance reduced gradient Langevin dynamics by stochastic delay differential equations, Sparse high-dimensional linear regression. Estimating squared error and a phase transition, Iterative algorithm for discrete structure recovery, Adaptive log-density estimation, On the strong oracle property of concave penalized estimators with infinite penalty derivative at the origin, De-biasing the Lasso with degrees-of-freedom adjustment, A generative approach to modeling data with quantitative and qualitative responses, Ridge regression revisited: debiasing, thresholding and bootstrap, High-dimensional regression with potential prior information on variable importance, Change points detection and parameter estimation for multivariate time series, Random weighting in LASSO regression, Nonparametric estimation of the random coefficients model: an elastic net approach, Bayesian factor-adjusted sparse regression, Single-index composite quantile regression for ultra-high-dimensional data, A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations, Large-scale multivariate sparse regression with applications to UK Biobank, Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors, Robust change point detection method via adaptive LAD-Lasso, Nonconcave penalized estimation in sparse vector autoregression model, Nonparametric variable selection and its application to additive models, Hierarchical inference for genome-wide association studies: a view on methodology with software, Debiasing the debiased Lasso with bootstrap, Robust machine learning by median-of-means: theory and practice, Lasso guarantees for \(\beta \)-mixing heavy-tailed time series, Statistical analysis of sparse approximate factor models, Fundamental 
limits of exact support recovery in high dimensions, Self-concordant analysis for logistic regression, Sparse regression with exact clustering, Adaptive estimation of covariance matrices via Cholesky decomposition, The Lasso as an \(\ell _{1}\)-ball model selection procedure, The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso), High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence, Robust regression through the Huber's criterion and adaptive lasso penalty, The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods, Least squares after model selection in high-dimensional sparse models, Sparse directed acyclic graphs incorporating the covariates, Model selection for high-dimensional linear regression with dependent observations, Which bridge estimator is the best for variable selection?, Sparse identification of truncation errors, Sparse regression: scalable algorithms and empirical performance, A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. 
and Hastie et al., Rejoinder: "Sparse regression: scalable algorithms and empirical performance", Consistent group selection with Bayesian high dimensional modeling, A partially proximal linearized alternating minimization method for finding Dantzig selectors, Parallel integrative learning for large-scale multi-response regression with incomplete outcomes, Bayesian model selection for high-dimensional Ising models, with applications to educational data, A new scope of penalized empirical likelihood with high-dimensional estimating equations, A significance test for the lasso, Discussion: "A significance test for the lasso", Rejoinder: "A significance test for the lasso", Lasso with long memory regression errors, High-dimensional mean estimation via \(\ell_1\) penalized normal likelihood, Sparse wavelet regression with multiple predictive curves, Sparse system identification for stochastic systems with general observation sequences, Sparse semiparametric discriminant analysis, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, Sparse trace norm regularization, Sparse principal component based high-dimensional mediation analysis, Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates, Adaptive group Lasso for high-dimensional generalized linear models, Parametric and semiparametric reduced-rank regression with flexible sparsity, A distribution-based Lasso for a general single-index model, Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations, Simultaneous feature selection and clustering based on square root optimization, Variable selection in partially linear additive hazards model with grouped covariates and a diverging number of parameters, An efficient algorithm for joint feature screening in ultrahigh-dimensional Cox's model, Estimation and optimal structure selection of high-dimensional Toeplitz covariance
matrix, A unified primal dual active set algorithm for nonconvex sparse recovery, Robust high-dimensional factor models with applications to statistical machine learning, Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions, High-dimensional variable selection via low-dimensional adaptive learning, Graphical-model based high dimensional generalized linear models, Iteratively reweighted \(\ell_1\)-penalized robust regression, Adaptive function-on-scalar regression with a smoothing elastic net, \(\ell_{2,0}\)-norm based selection and estimation for multivariate generalized linear models, Evaluating visual properties via robust HodgeRank, Second-order Stein: SURE for SURE and other applications in high-dimensional inference, Prediction bounds for higher order total variation regularized least squares, High-dimensional structure learning of sparse vector autoregressive models using fractional marginal pseudo-likelihood, Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals, Nonnegative estimation and variable selection under minimax concave penalty for sparse high-dimensional linear regression models, Optimal linear discriminators for the discrete choice model in growing dimensions, In defense of the indefensible: a very naïve approach to high-dimensional inference, Robust subset selection, Smoothly adaptively centered ridge estimator, Feature selection for data integration with mixed multiview data, Network differential connectivity analysis, Variable selection in convex quantile regression: \(\mathcal{L}_1\)-norm or \(\mathcal{L}_0\)-norm regularization?, Lasso regression and its application in forecasting macro economic indicators: a study on Vietnam's exports, Robust moderately clipped LASSO for simultaneous outlier detection and variable selection, Information criteria bias correction for group selection, Penalized wavelet estimation and robust denoising for 
irregular spaced data, High-dimensional linear regression with hard thresholding regularization: theory and algorithm, Sparse regression at scale: branch-and-bound rooted in first-order optimization, Bayesian frequentist bounds for machine learning and system identification, On regularization of generalized maximum entropy for linear models, Multivariate sparse Laplacian shrinkage for joint estimation of two graphical structures, Revisiting feature selection for linear models with FDR and power guarantees, On Path Restoration for Censored Outcomes, Goodness-of-fit tests for high-dimensional Gaussian linear models, Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization, The Loss Rank Criterion for Variable Selection in Linear Regression Analysis, Identification of Partially Linear Structure in Additive Models with an Application to Gene Expression Prediction from Sequences, Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing, A component lasso, Sensitivity Analysis for Mirror-Stratifiable Convex Functions, Meta‐analysis based variable selection for gene expression data, Simultaneous analysis of Lasso and Dantzig selector, Tuning Parameter Selection in the LASSO with Unspecified Propensity, Covariance-Regularized Regression and Classification for high Dimensional Problems, A Bayesian approach to sparse dynamic network identification, Oracle Estimation of a Change Point in High-Dimensional Quantile Regression, A note on the asymptotic distribution of lasso estimator for correlated data, Nonnegative elastic net and application in index tracking, A statistical framework for pathway and gene identification from integrative analysis, Sparsest representations and approximations of an underdetermined linear system, Searching for minimal optimal neural networks, SONIC: social network analysis with influencers and communities, Tests for Coefficients in High-dimensional Additive Hazard Models, Laplace Error Penalty-based 
Variable Selection in High Dimension, L1-norm-based principal component analysis with adaptive regularization, Graph-based sparse linear discriminant analysis for high-dimensional classification, Interquantile shrinkage in spatial additive autoregressive models, Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model, Optimal estimation of direction in regression models with large number of parameters, Variable Selection for Model-Based High-Dimensional Clustering and Its Application to Microarray Data, Blessing of massive scale: spatial graphical model estimation with a total cardinality constraint approach, A simple homotopy proximal mapping algorithm for compressive sensing, Asymptotic properties of bridge estimators in sparse high-dimensional regression models, High-dimensional generalized linear models and the lasso, Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators, When Ramanujan meets time-frequency analysis in complicated time series analysis, High-dimensional VARs with common factors, Support union recovery in high-dimensional multivariate regression, A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization, Goodness-of-Fit Tests for High Dimensional Linear Models, Envelope-based sparse reduced-rank regression for multivariate linear model, Recovery of partly sparse and dense signals, Adaptive Lasso for generalized linear models with a diverging number of parameters, Multi-stage convex relaxation for feature selection, High-dimensional sparse portfolio selection with nonnegative constraint, Calibrating nonconvex penalized regression in ultra-high dimension, Structure estimation for discrete graphical models: generalized covariance matrices and their inverses, Estimation and variable selection with exponential weights, Adaptive robust variable selection, Adaptive Lasso estimators for ultrahigh dimensional generalized linear models, Multiple 
predicting K-fold cross-validation for model selection, DASSO: Connections Between the Dantzig Selector and Lasso, L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses, Greedy forward regression for variable screening, High-Dimensional Cox Models: The Choice of Penalty as Part of the Model Building Process, Randomized pick-freeze for sparse Sobol indices estimation in high dimension, Boosting with structural sparsity: a differential inclusion approach, Prediction and estimation consistency of sparse multi-class penalized optimal scoring, Sorted concave penalized regression, Asymptotic properties of the residual bootstrap for Lasso estimators, Strong oracle optimality of folded concave penalized estimation, Endogeneity in high dimensions, A note on the one-step estimator for ultrahigh dimensionality, Low Complexity Regularization of Linear Inverse Problems, Lazy lasso for local regression, Asymptotic properties of concave \(L_1\)-norm group penalties, BAYESIAN HYPER-LASSOS WITH NON-CONVEX PENALIZATION, Variable Selection in Linear Mixed Models Using an Extended Class of Penalties, QUADRO: a supervised dimension reduction method via Rayleigh quotient optimization, Principal component analysis in the local differential privacy model, Convergence and sparsity of Lasso and group Lasso in high-dimensional generalized linear models, PenPC: A two-step approach to estimate the skeletons of high-dimensional directed acyclic graphs, Lasso meets horseshoe: a survey, Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator, Regularization methods for high-dimensional sparse control function models, A review of Gaussian Markov models for conditional independence, Changepoint detection by the quantile Lasso method, A Necessary Condition for the Strong Oracle Property, High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection
and ranking, Variable selection for high-dimensional regression models with time series and heteroscedastic errors, Pseudo estimation and variable selection in regression, Efficient Threshold Selection for Multivariate Total Variation Denoising, Lasso-type and Heuristic Strategies in Model Selection and Forecasting, Tuning parameter calibration for ℓ1-regularized logistic regression, Weak Convergence of the Regularization Path in Penalized M-Estimation, Unnamed Item, On the asymptotic variance of the debiased Lasso, A knockoff filter for high-dimensional selective inference, A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery, Risk bound of transfer learning using parametric feature mapping and its application to sparse coding, Linear Hypothesis Testing in Dense High-Dimensional Linear Models, Weaker regularity conditions and sparse recovery in high-dimensional regression, Variable selection and estimation in generalized linear models with the seamless L0 penalty, Sparsistency and agnostic inference in sparse PCA, Sure Independence Screening for Ultrahigh Dimensional Feature Space, Stability Selection, On model selection consistency of regularized M-estimators, Preconditioning the Lasso for sign consistency, Sparse learning via Boolean relaxations, Sequential profile Lasso for ultra-high-dimensional partially linear models, The robust desparsified lasso and the focused information criterion for high-dimensional generalized linear models, Integrating Multisource Block-Wise Missing Data in Model Selection, Investigating competition in financial markets: a sparse autologistic model for dynamic network data, Modeling Pregnancy Outcomes Through Sequentially Nested Regression Models, REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES, Elastic-net Regularized High-dimensional Negative Binomial Regression:
Consistency and Weak Signal Detection, Semi-Standard Partial Covariance Variable Selection When Irrepresentable Conditions Fail, The revisited knockoffs method for variable selection in L1-penalized regressions, Variable Selection With Second-Generation P-Values, Ising model selection using ℓ1-regularized linear regression: a statistical mechanics analysis, L0-Regularized Learning for High-Dimensional Additive Hazards Regression, Sparse reduced-rank regression for multivariate varying-coefficient models, Bayesian bootstrap adaptive lasso estimators of regression models, Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles, MIP-BOOST: Efficient and Effective L0 Feature Selection for Linear Regression, Unnamed Item, Multi-step adaptive elastic-net: reducing false positives in high-dimensional variable selection, Variable selection in heteroscedastic single-index quantile regression, Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models, In defense of LASSO, A penalized approach to covariate selection through quantile regression coefficient models, Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity, Adaptive Bayesian SLOPE: Model Selection With Incomplete Data, Variable selection and forecasting via automated methods for linear models: LASSO/adaLASSO and Autometrics, Exploiting Disagreement Between High-Dimensional Variable Selectors for Uncertainty Visualization, Consistent model selection procedure for general integer-valued time series, Variable selection for semiparametric random-effects conditional density models with longitudinal data, A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure, Adaptive k-class estimation in high-dimensional linear models, On estimation error bounds of the Elastic Net when p ≫ n, Review of
Bayesian selection methods for categorical predictors using JAGS, Unnamed Item, Asymptotic Theory of ℓ1-Regularized PDE Identification from a Single Noisy Trajectory, Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity, Rapid penalized likelihood-based outlier detection via heteroskedasticity test, Model selection in high-dimensional noisy data: a simulation study, Unnamed Item, Fast and approximate exhaustive variable selection for generalised linear models with APES, Variational Bayes for High-Dimensional Linear Regression With Sparse Priors, Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation, On a Semiparametric Data‐Driven Nonlinear Model with Penalized Spatio‐Temporal Lag Interactions, Grouped penalization estimation of the osteoporosis data in the traditional Chinese medicine, Independently Interpretable Lasso for Generalized Linear Models, Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery, Generalized Regression Estimators with High-Dimensional Covariates, A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models, Time-varying Hazards Model for Incorporating Irregularly Measured, High-Dimensional Biomarkers, A Tuning-free Robust and Efficient Approach to High-dimensional Regression, Penalized Interaction Estimation for Ultrahigh Dimensional Quadratic Regression, Unnamed Item, Unnamed Item, Unnamed Item, Unnamed Item, Truncated L1 Regularized Linear Regression: Theory and Algorithm, Consistent parameter estimation for Lasso and approximate message passing, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Variable selection in partially linear wavelet models, Unnamed Item, The EBIC and a sequential procedure for feature selection in interactive linear models with high-dimensional data, Structured sparsity through convex
optimization, Quasi-likelihood and/or robust estimation in high dimensions, A selective review of group selection in high-dimensional models, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, A general theory of concave regularization for high-dimensional sparse estimation problems, Performance bounds for parameter estimates of high-dimensional linear models with correlated errors, Model Selection for High-Dimensional Quadratic Regression via Regularization, High-dimensional linear model selection motivated by multiple testing, Variable Selection for Nonparametric Learning with Power Series Kernels, Model selection procedure for high‐dimensional data, Skills in demand for ICT and statistical occupations: Evidence from web‐based job vacancies, Model selection and parameter estimation of a multinomial logistic regression model, Simple expressions of the LASSO and SLOPE estimators in low-dimension, Unnamed Item, The Doubly Adaptive LASSO for Vector Autoregressive Models, Discussion: One-step sparse estimates in nonconcave penalized likelihood models, Robust Variable and Interaction Selection for Logistic Regression and General Index Models, cmenet: A New Method for Bi-Level Variable Selection of Conditional Main Effects, Simulation-based Value-at-Risk for nonlinear portfolios, Nonsparse Learning with Latent Variables, Skinny Gibbs: A Consistent and Scalable Gibbs Sampler for Model Selection, On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments, Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO, Unnamed Item, Unnamed Item, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable 
graphical model selection via convex optimization, Monitoring sequential structural changes in penalized high-dimensional linear models, On asymptotic risk of selecting models for possibly nonstationary time-series, The Dantzig selector: recovery of signal via ℓ1 − αℓ2 minimization, Variance estimation based on blocked 3×2 cross-validation in high-dimensional linear regression, Lassoing the HAR Model: A Model Selection Perspective on Realized Volatility Dynamics, Penalized and ridge-type shrinkage estimators in Poisson regression model, Thresholded spectral algorithms for sparse approximations, Univariate measurement error selection likelihood for variable selection of additive model, Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal, Oracle Efficient Estimation of Structural Breaks in Cointegrating Regressions