Statistics for high-dimensional data. Methods, theory and applications.

From MaRDI portal
Publication: 532983

DOI: 10.1007/978-3-642-20192-9
zbMath: 1273.62015
OpenAlex: W4247571494
Wikidata: Q57256043
Scholia: Q57256043
MaRDI QID: Q532983

Sara van de Geer, Peter Bühlmann

Publication date: 2 May 2011

Published in: Springer Series in Statistics

Full work available at URL: https://doi.org/10.1007/978-3-642-20192-9



Related Items

On principal components regression, random projections, and column subsampling, Monotone splines Lasso, Simultaneous monitoring of process mean vector and covariance matrix via penalized likelihood estimation, Model selection consistency of Lasso for empirical data, On the post selection inference constant under restricted isometry properties, An alternating direction method of multipliers for MCP-penalized regression with high-dimensional data, Statistics for big data: a perspective, On Schott's and Mao's test statistics for independence of normal random vectors, Are discoveries spurious? Distributions of maximum spurious correlations and their applications, Distributed testing and estimation under sparse high dimensional models, Degrees of freedom for piecewise Lipschitz estimators, Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments, Robust and sparse estimators for linear regression models, High-dimensional inference for personalized treatment decision, Parsimonious and powerful composite likelihood testing for group difference and genotype-phenotype association, Variable selection for multiply-imputed data with penalized generalized estimating equations, Ridge estimation of inverse covariance matrices from high-dimensional data, Structured variable selection via prior-induced hierarchical penalty functions, A globally convergent algorithm for Lasso-penalized mixture of linear regression models, A forward and backward stagewise algorithm for nonconvex loss functions with adaptive Lasso, Confidence regions for entries of a large precision matrix, Discussion of ``On concentration for (regularized) empirical risk minimization by Sara van de Geer and Martin Wainwright, Sparse learning of the disease severity score for high-dimensional data, High-dimensional simultaneous inference with the bootstrap, Sparse nonparametric model for the detection of impact points of a functional variable, A penalized likelihood 
method for structural equation modeling, Estimating a sparse reduction for general regression in high dimensions, Bayesian additive regression trees using Bayesian model averaging, Inferring large graphs using \(\ell_1\)-penalized likelihood, Oracle inequalities for cross-validation type procedures, PAC-Bayesian estimation and prediction in sparse additive models, Non-asymptotic approach to varying coefficient model, Some optimality properties of FDR controlling rules under sparsity, Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso, The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso), Fused Lasso penalized least absolute deviation estimator for high dimensional linear regression, Time-optimal hands-off control for linear time-invariant systems, A systematic review on model selection in high-dimensional regression, Covariance-insured screening, Prediction with a flexible finite mixture-of-regressions, Simultaneous nonparametric regression in RADWT dictionaries, Adaptive estimation of the sparsity in the Gaussian vector model, Recent advances in functional data analysis and high-dimensional statistics, ArCo: an artificial counterfactual approach for high-dimensional panel time-series data, Portal nodes screening for large scale social networks, Oracle inequalities for high-dimensional prediction, Minimax optimal estimation in partially linear additive models under high dimension, Variable screening for high dimensional time series, Convex and non-convex regularization methods for spatial point processes intensity estimation, Bayesian estimation of sparse signals with a continuous spike-and-slab prior, Column normalization of a random measurement matrix, ROCKET: robust confidence intervals via Kendall's tau for transelliptical graphical models, Robust low-rank matrix estimation, Debiasing the Lasso: optimal sample size for Gaussian designs, 
Overcoming the limitations of phase transition by higher order analysis of regularization techniques, Regularization and the small-ball method. I: Sparse recovery, Gaussian and bootstrap approximations for high-dimensional U-statistics and their applications, I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error, On the degrees of freedom of mixed matrix regression, Model selection and local geometry, Sparse inference of the drift of a high-dimensional Ornstein-Uhlenbeck process, Optimal estimation of slope vector in high-dimensional linear transformation models, Equivalent Lipschitz surrogates for zero-norm and rank optimization problems, Feature screening for nonparametric and semiparametric models with ultrahigh-dimensional covariates, Spline estimator for ultra-high dimensional partially linear varying coefficient models, Inference under Fine-Gray competing risks model with high-dimensional covariates, Learning from MOM's principles: Le Cam's approach, Partial penalized empirical likelihood ratio test under sparse case, Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations, Chained correlations for feature selection, Bayesian MIDAS penalized regressions: estimation, selection, and prediction, Sparse space-time models: concentration inequalities and Lasso, Lasso estimation for spherical autoregressive processes, Time-varying Lasso, High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}, On the sensitivity of the Lasso to the number of predictor variables, A cost-sensitive constrained Lasso, Sparse randomized shortest paths routing with Tsallis divergence regularization, Factor-adjusted multiple testing of correlations, Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions, Optimal sparsity testing in linear regression model, Adaptation bounds for confidence bands under self-similarity, 
Inference without compatibility: using exponential weighting for inference on a parameter of a linear model, Semiparametric efficiency bounds for high-dimensional models, On the exponentially weighted aggregate with the Laplace prior, Oracle posterior contraction rates under hierarchical priors, Multicarving for high-dimensional post-selection inference, Double fused Lasso regularized regression with both matrix and vector valued predictors, Iteratively reweighted \(\ell_1\)-penalized robust regression, Matrix optimization based Euclidean embedding with outliers, Gini correlation for feature screening, The de-biased group Lasso estimation for varying coefficient models, High-dimensional linear models: a random matrix perspective, Second-order Stein: SURE for SURE and other applications in high-dimensional inference, The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning, Variable selection consistency of Gaussian process regression, From multivariate to functional data analysis: fundamentals, recent developments, and emerging areas, Recent advances in shrinkage-based high-dimensional inference, Support vector machine classifiers by non-Euclidean margins, Maximum likelihood estimation of potential energy in interacting particle systems from single-trajectory data, Modeling association between multivariate correlated outcomes and high-dimensional sparse covariates: the adaptive SVS method, REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES, Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior, A New Principle for Tuning-Free Huber Regression, Regularized projection score estimation of treatment effects in high-dimensional quantile regression, Elastic-net Regularized High-dimensional Negative Binomial Regression: Consistency and Weak Signal Detection, Prior Knowledge Guided Ultra-High Dimensional Variable Screening With Application to 
Neuroimaging Data, Using Improved Robust Estimators to Semiparametric Model with High Dimensional Data, Integer constraints for enhancing interpretability in linear regression, ESTIMATION OF THE KRONECKER COVARIANCE MODEL BY QUADRATIC FORM, Switching Regression Models and Causal Inference in the Presence of Discrete Latent Variables, Sequential change-point detection in high-dimensional Gaussian graphical models, Fine-Grained Job Salary Benchmarking with a Nonparametric Dirichlet Process–Based Latent Factor Model, L0-Regularized Learning for High-Dimensional Additive Hazards Regression, High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks, Parameter choices for sparse regularization with the ℓ1 norm, An Alternating Method for Cardinality-Constrained Optimization: A Computational Study for the Best Subset Selection and Sparse Portfolio Problems, Forecasting Multiple Time Series With One-Sided Dynamic Principal Components, Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles, Projection-based Inference for High-dimensional Linear Models, The Spike-and-Slab LASSO, Structured sparse support vector machine with ordered features, Block-Diagonal Covariance Estimation and Application to the Shapley Effects in Sensitivity Analysis, Overlapping group lasso for high-dimensional generalized linear models, Sparsity identification in ultra-high dimensional quantile regression models with longitudinal data, Variance estimation for sparse ultra-high dimensional varying coefficient models, Information theoretic limits of learning a sparse rule, Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models, A penalized approach to covariate selection through quantile regression coefficient
models, A modified multinomial baseline logit model with logit functions having different covariates, Sparse group lasso for multiclass functional logistic regression models, Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression, High-dimensional dynamic systems identification with additional constraints, Asymptotic Theory of \(\ell_1\)-Regularized PDE Identification from a Single Noisy Trajectory, A primal dual active set with continuation algorithm for high-dimensional nonconvex SICA-penalized regression, Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation, Online Decision Making with High-Dimensional Covariates, Independently Interpretable Lasso for Generalized Linear Models, A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models, An algorithm for the multivariate group lasso with covariance estimation, Hard thresholding regression, A Tuning-free Robust and Efficient Approach to High-dimensional Regression, Fixed Effects Testing in High-Dimensional Linear Mixed Models, Mixed-Effect Time-Varying Network Model and Application in Brain Connectivity Analysis, Kernel Meets Sieve: Post-Regularization Confidence Bands for Sparse Additive Model, A High‐dimensional Focused Information Criterion, Scalable Algorithms for the Sparse Ridge Regression, On the Improved Rates of Convergence for Matérn-Type Kernel Ridge Regression with Application to Calibration of Computer Models, ESTIMATION FOR THE PREDICTION OF POINT PROCESSES WITH MANY COVARIATES, Continuous-time model identification: application on a behavioural (miLife) study, Classification Error of the Thresholded Independence Rule, Spectral gaps, symmetries and log-concave perturbations, Minimax Optimal Procedures for Locally Private
Estimation, Error Variance Estimation in Ultrahigh-Dimensional Additive Models, An ADMM with continuation algorithm for non-convex SICA-penalized regression in high dimensions, A study on tuning parameter selection for the high-dimensional lasso, High-dimensional linear model selection motivated by multiple testing, Generalized Sobol sensitivity indices for dependent variables: numerical methods, Scale-Invariant Sparse PCA on High-Dimensional Meta-Elliptical Data, Spatially Varying Coefficient Model for Neuroimaging Data With Jump Discontinuities, The SgenoLasso and its cousins for selective genotyping and extreme sampling: application to association studies and genomic selection, Sharp Oracle Inequalities for Square Root Regularization, Generalized Conditional Gradient for Sparse Estimation, Gap Safe screening rules for sparsity enforcing penalties, Regularization and the small-ball method II: complexity dependent error rates, Nonsparse Learning with Latent Variables, A Mixed-Integer Fractional Optimization Approach to Best Subset Selection, Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms, Spatially multi-scale dynamic factor modeling via sparse estimation, The Convex Mixture Distribution: Granger Causality for Categorical Time Series, The Determinants of Planned Retirement Age of Informal Worker in Chiang Mai Province, Thailand, Singularity Structures and Impacts on Parameter Estimation in Finite Mixtures of Distributions, Penalized and constrained LAD estimation in fixed and high dimension, Learning nonlinear turbulent dynamics from partial observations via analytically solvable conditional statistics, A review on instance ranking problems in statistical learning, Mathematical foundations of machine learning.
Abstracts from the workshop held March 21--27, 2021 (hybrid meeting), Adaptive estimation in multivariate response regression with hidden variables, Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors, High-dimensional sufficient dimension reduction through principal projections, Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data, The asymptotic distribution of the MLE in high-dimensional logistic models: arbitrary covariance, The inverse problem for conducting defective lattices, A generative approach to modeling data with quantitative and qualitative responses, Thresholding tests based on affine Lasso to achieve non-asymptotic nominal level and high power under sparse and dense alternatives in high dimension, Likelihood theory for the graph Ornstein-Uhlenbeck process, Adaptive Huber regression on Markov-dependent data, Doubly debiased Lasso: high-dimensional inference under hidden confounding, Ridge regression revisited: debiasing, thresholding and bootstrap, Penalized least square in sparse setting with convex penalty and non Gaussian errors, Weighted Lasso estimates for sparse logistic regression: non-asymptotic properties with measurement errors, Bayesian factor-adjusted sparse regression, Horseshoe shrinkage methods for Bayesian fusion estimation, A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations, Large-scale multivariate sparse regression with applications to UK Biobank, Bayesian linear regression for multivariate responses under group sparsity, Adaptive risk bounds in univariate total variation denoising and trend filtering, Statistical inference for model parameters in stochastic gradient descent, Sparse high-dimensional regression: exact scalable algorithms and phase transitions, Nonconcave penalized estimation in sparse vector autoregression model, Asymptotics of estimators for nonparametric multivariate regression 
models with long memory, Hierarchical inference for genome-wide association studies: a view on methodology with software, Rejoinder on: ``Hierarchical inference for genome-wide association studies: a view on methodology with software, Inference for high-dimensional instrumental variables regression, Learning rates for partially linear functional models with high dimensional scalar covariates, Robust machine learning by median-of-means: theory and practice, Lasso guarantees for \(\beta \)-mixing heavy-tailed time series, Statistical analysis of sparse approximate factor models, Variable selection for sparse logistic regression, Bayesian fusion estimation via \(t\) shrinkage, Test of significance for high-dimensional longitudinal data, A general framework for Bayes structured linear models, Relaxing the assumptions of knockoffs by conditioning, Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator, Invariance, causality and robustness, On Dantzig and Lasso estimators of the drift in a high dimensional Ornstein-Uhlenbeck model, High-dimensional predictive regression in the presence of cointegration, A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. 
and Hastie et al., Ill-posed estimation in high-dimensional models with instrumental variables, Model selection in linear mixed-effect models, Statistical identification of important nodes in biological systems, High-dimensional central limit theorems by Stein's method, Bayesian model selection for high-dimensional Ising models, with applications to educational data, Controlling the false discovery rate for latent factors via unit-rank deflation, Pivotal estimation via square-root lasso in nonparametric regression, Lasso with long memory regression errors, Regularization method for predicting an ordinal response using longitudinal high-dimensional genomic data, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, Sparse distance metric learning, Data science, big data and statistics, Comments on ``Data science, big data and statistics, Prediction error bounds for linear regression with the TREX, An improved variable selection procedure for adaptive Lasso in high-dimensional survival analysis, Quasi-Bayesian estimation of large Gaussian graphical models, Prediction and estimation consistency of sparse multi-class penalized optimal scoring, Global and local two-sample tests via regression, Variable selection via adaptive false negative control in linear regression, Adaptive estimation of the rank of the coefficient matrix in high-dimensional multivariate response regression models, Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors, Multi-kernel unmixing and super-resolution using the modified matrix pencil method, Lasso meets horseshoe: a survey, Distributed simultaneous inference in generalized linear models via confidence distribution, Control-based algorithms for high dimensional online learning, Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator, A review of Gaussian Markov models for 
conditional independence, Inference in partially identified models with many moment inequalities using Lasso, High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking, Asymptotic theory of the adaptive sparse group Lasso, Sharp oracle inequalities for low-complexity priors, Convergence rates of least squares regression estimators with heavy-tailed errors, Sampling from non-smooth distributions through Langevin diffusion, Compound Poisson point processes, concentration and oracle inequalities, Composite versus model-averaged quantile regression, Optimal linear discriminators for the discrete choice model in growing dimensions, Tuning parameter calibration for personalized prediction in medicine, Regret lower bound and optimal algorithm for high-dimensional contextual linear bandit, High-dimensional inference for linear model with correlated errors, Asymptotic linear expansion of regularized M-estimators, Distributed adaptive Huber regression, A convex programming solution based debiased estimator for quantile with missing response and high-dimensional covariables, Confidence intervals for parameters in high-dimensional sparse vector autoregression, Pan-disease clustering analysis of the trend of period prevalence, Effective model calibration via sensible variable identification and adjustment with application to composite fuselage simulation, Estimation of nonparametric regression models by wavelets, Sparse spatially clustered coefficient model via adaptive regularization, Penalized relative error estimation of functional multiplicative regression models with locally sparse properties, Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses, Penalised likelihood methods for phase-type dimension selection, Computation for latent variable model estimation: a unified stochastic proximal framework, A phase transition for finding needles in nonlinear haystacks 
with LASSO artificial neural networks, Item response thresholds models: a general class of models for varying types of items, High dimensional generalized linear models for temporal dependent data, A computationally efficient and flexible algorithm for high dimensional mean and covariance matrix change point models, A smoothing iterative method for quantile regression with nonconvex \(\ell_p\) penalty, Stable prediction in high-dimensional linear models, Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction, Nonasymptotic approach to Bayesian semiparametric inference, Noisy Monte Carlo: convergence of Markov chains with approximate transition kernels, Classification of longitudinal data through a semiparametric mixed‐effects model based on lasso‐type estimators, Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization, Sensitivity Analysis for Mirror-Stratifiable Convex Functions, Regularization and discretization error estimates for optimal control of ODEs with group sparsity, Portfolio optimization under Expected Shortfall: contour maps of estimation error, Score test variable screening, Oracle Estimation of a Change Point in High-Dimensional Quantile Regression, Conditional sure independence screening by conditional marginal empirical likelihood, Frequentist validity of Bayesian limits, Sparsest representations and approximations of an underdetermined linear system, Using penalized likelihood to select parameters in a random coefficients multinomial logit model, Augmented factor models with applications to validating market risk factors and forecasting bond risk premia, The diffusion geometry of fibre bundles: horizontal diffusion maps, Variable selection and structure identification for varying coefficient Cox models, Nonparametric estimation and inference under shape restrictions, Generalized canonical correlation variables improved estimation in high dimensional seemingly 
unrelated regression models, A primal Douglas-Rachford splitting method for the constrained minimization problem in compressive sensing, Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study, The degrees of freedom of partly smooth regularizers, Globally sparse and locally dense signal recovery for compressed sensing, Nonparametric C- and D-vine-based quantile regression, Accuracy assessment for high-dimensional linear regression, Multiple testing of conditional independence hypotheses using information-theoretic approach, High-dimensional robust regression with \(L_q\)-loss functions, Generic error bounds for the generalized Lasso with sub-exponential data, On the penalized maximum likelihood estimation of high-dimensional approximate factor model, Model-based regression clustering for high-dimensional data: application to functional data, Predictor ranking and false discovery proportion control in high-dimensional regression, PAC-Bayesian risk bounds for group-analysis sparse regression by exponential weighting, Regularized learning schemes in feature Banach spaces, Some improved estimation strategies in high-dimensional semiparametric regression models with application to riboflavin production data, Lasso regression in sparse linear model with \(\varphi\)-mixing errors, Variable selection procedures from multiple testing, Misspecified nonconvex statistical optimization for sparse phase retrieval, Double-estimation-friendly inference for high-dimensional misspecified models, Lasso adjustments of treatment effect estimates in randomized experiments, Robust and Sparse Estimation of the Inverse Covariance Matrix Using Rank Correlation Measures, Cluster feature selection in high-dimensional linear models, High-dimensional estimation with geometric constraints: Table 1., Goodness-of-Fit Tests for High Dimensional Linear Models, Autoregressive models for gene regulatory network inference: sparsity, 
stability and causality issues, Variable selection in Cox regression models with varying coefficients, Calibrating nonconvex penalized regression in ultra-high dimension, On the sparsity of Lasso minimizers in sparse data recovery, Penalized estimation in high-dimensional hidden Markov models with state-specific graphical models, Ultrahigh dimensional variable selection through the penalized maximum trimmed likelihood estimator, Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors, Confidence sets in sparse regression, Noisy low-rank matrix completion with general sampling distribution, Statistical inference in compound functional models, Adaptive robust variable selection, Oracle Inequalities for Local and Global Empirical Risk Minimizers, Replica approach to mean-variance portfolio optimization, Analytic solution to variance optimization with no short positions, A statistical mechanics approach to de-biasing and uncertainty estimation in LASSO for random measurements, Adaptive Huber Regression, RANK: Large-Scale Inference With Graphical Nonlinear Knockoffs, L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses, On b-bit min-wise hashing for large-scale regression and classification with sparse data, Randomized pick-freeze for sparse Sobol indices estimation in high dimension, Nonparametric Statistics and High/Infinite Dimensional Data, Endogeneity in high dimensions, Estimation in High Dimensions: A Geometric Perspective, Maximin effects in inhomogeneous large-scale data, Nonconvex sorted \(\ell_1\) minimization for sparse approximation, Bayesian Block-Diagonal Predictive Classifier for Gaussian Data, Concentration Inequalities for Statistical Inference, LIQUIDITY RISK AND INSTABILITIES IN PORTFOLIO OPTIMIZATION, Gradient-based Regularization Parameter Selection for Problems With Nonsmooth Penalty Functions, Efficient Threshold Selection for Multivariate Total
Variation Denoising, Testing Sparsity-Inducing Penalties, Sparse nonparametric model for regression with functional covariate, Computational and statistical analyses for robust non-convex sparse regularized regression problem, Tuning parameter calibration for \(\ell_1\)-regularized logistic regression, False discovery control for penalized variable selections with high-dimensional covariates, Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation, Adaptively weighted group Lasso for semiparametric quantile regression models, On the asymptotic variance of the debiased Lasso, High-dimensional generalized linear models incorporating graphical structure among predictors, Non-separable models with high-dimensional data, Weaker regularity conditions and sparse recovery in high-dimensional regression, Small-deviation inequalities for sums of random matrices, Sparsistency and agnostic inference in sparse PCA, Structured estimation for the nonparametric Cox model, High-dimensional Ising model selection with Bayesian information criteria, On model selection consistency of regularized M-estimators, Estimation of positive definite \(M\)-matrices and structure learning for attractive Gaussian Markov random fields, Minimax-optimal nonparametric regression in high dimensions, Lasso and probabilistic inequalities for multivariate point processes, Preconditioning the Lasso for sign consistency, Sparse learning via Boolean relaxations, Comparison and anti-concentration bounds for maxima of Gaussian random vectors, A group VISA algorithm for variable selection, Comments on: ``A random forest guided tour, High-dimensional proportionality test of two covariance matrices and its application to gene expression data, Achieving the oracle property of OEM with nonconvex penalties, The robust desparsified lasso and the focused information criterion for high-dimensional generalized linear models, Markov Neighborhood Regression for High-Dimensional 
Inference, On Multiplier Processes Under Weak Moment Assumptions, Partial Least Squares for Heterogeneous Data, Guns and Suicides, On Mixture Alternatives and Wilcoxon’s Signed-Rank Test, Automatic Component Selection in Additive Modeling of French National Electricity Load Forecasting, On the Behavior of the Risk of a LASSO-Type Estimator, A Note on High-Dimensional Linear Regression With Interactions, Conditional predictive inference for stable algorithms, Nonparametric Estimation of Galaxy Cluster Emissivity and Detection of Point Sources in Astrophysics With Two Lasso Penalties, Forward variable selection for ultra-high dimensional quantile regression models, Grouped variable selection with discrete optimization: computational and statistical perspectives, Robust sure independence screening for nonpolynomial dimensional generalized linear models, Bayesian elastic net based on empirical likelihood, High-dimensional sparse index tracking based on a multi-step convex optimization approach, Byzantine-robust distributed sparse learning for \(M\)-estimation, Quantile regression of ultra-high dimensional partially linear varying-coefficient model with missing observations, Two-stage penalized algorithms via integrating prior information improve gene selection from omics data, Inference for Nonparanormal Partial Correlation via Regularized Rank-Based Nodewise Regression, Jackknife model averaging for high‐dimensional quantile regression, Efficient multiple change point detection for high‐dimensional generalized linear models, Sampling distributions of optimal portfolio weights and characteristics in small and large dimensions, Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm, Assessment of Covariance Selection Methods in High-Dimensional Gaussian Graphical Models, A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates, Automatic bias correction for 
testing in high‐dimensional linear models, Simplex-based Multinomial Logistic Regression with Diverging Numbers of Categories and Covariates, Regularization in dynamic random‐intercepts models for analysis of longitudinal data, On the selection of predictors by using greedy algorithms and information theoretic criteria, Discussion of “A Scale-Free Approach for False Discovery Rate Control in Generalized Linear Models”, Inference for High-Dimensional Exchangeable Arrays, Orthogonalized Kernel Debiased Machine Learning for Multimodal Data Analysis, Debiased lasso for generalized linear models with a diverging number of covariates, Sparse and Low-Rank Matrix Quantile Estimation With Application to Quadratic Regression, Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation method, Variable selection in linear-circular regression models, An efficient semi-proximal ADMM algorithm for low-rank and sparse regularized matrix minimization problems with real-world applications, Double bias correction for high-dimensional sparse additive hazards regression with covariate measurement errors, Controlling False Discovery Rate Using Gaussian Mirrors, Sparse regression for low-dimensional time-dynamic varying coefficient models with application to air quality data, Testing stochastic dominance with many conditioning variables, Sparse quantile regression, Analytic approach to variance optimization under an \(\mathcal{l}_1\) constraint, Neuronized Priors for Bayesian Sparse Linear Regression, Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data, A sparse additive model for high-dimensional interactions with an exposure variable, Identification of survival relevant genes with measurement error in gene expression incorporated, Tuning parameter selection for penalized estimation via \(R^2\), Model averaging for support vector classifier by cross-validation, Adaptive denoising of signals with 
local shift-invariant structure, The Lasso with structured design and entropy of (absolute) convex hulls, On accuracy of Gaussian approximation in Bayesian semiparametric problems, An ensemble EM algorithm for Bayesian variable selection, Variational Bayes for High-Dimensional Linear Regression With Sparse Priors, UNIFORM INFERENCE IN HIGH-DIMENSIONAL DYNAMIC PANEL DATA MODELS WITH APPROXIMATELY SPARSE FIXED EFFECTS, On the characterizations of solutions to perturbed l1 conic optimization problem, Integrative Analysis of Cancer Diagnosis Studies with Composite Penalization, Overfitting, generalization, and MSE in class probability estimation with high‐dimensional data, Score-based causal learning in additive noise models, The Lasso for High Dimensional Regression with a Possible Change Point, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Variance-based regularization with convex objectives, Adaptive Bayesian density regression for high-dimensional data, Quasi-likelihood and/or robust estimation in high dimensions, A selective review of group selection in high-dimensional models, A general theory of concave regularization for high-dimensional sparse estimation problems, Performance bounds for parameter estimates of high-dimensional linear models with correlated errors, Rejoinder, Comments on: ``Probability enhanced effective dimension reduction for classifying sparse functional data, Discussion of: ``Grouping strategies and thresholding for high dimension linear models, Simple expressions of the LASSO and SLOPE estimators in low-dimension, Simulation-based Value-at-Risk for nonlinear portfolios, Skinny Gibbs: A Consistent and Scalable Gibbs Sampler for Model Selection, Bayesian Regularization for Graphical Models With Unequal Shrinkage, Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension, On the Use of the Lasso for 
Instrumental Variables Estimation with Some Invalid Instruments, The Partial Linear Model in High Dimensions, Learning rates for partially linear support vector machine in high dimensions, Semi-analytic approximate stability selection for correlated data in generalized linear models, Variable Selection Methods in High-dimensional Regression—A Simulation Study, Towards Statistically Provable Geometric 3D Human Pose Recovery, Multiscale Change Point Inference, An upper bound for functions of estimators in high dimensions, Robust Tensor Completion: Equivalent Surrogates, Error Bounds, and Algorithms, A Unifying Tutorial on Approximate Message Passing, Oracle Inequalities for Convex Loss Functions with Nonlinear Targets, The Risk of James–Stein and Lasso Shrinkage, Lassoing the HAR Model: A Model Selection Perspective on Realized Volatility Dynamics, Lassoing the Determinants of Retirement, Adaptive LASSO estimation for ARDL models with GARCH innovations, Optimization by Gradient Boosting, For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability, Nonconcave penalized M-estimation for the least absolute relative errors model, Scalable Bayesian high-dimensional local dependence learning, High-dimensional robust inference for censored linear models, The EAS approach to variable selection for multivariate response data in high-dimensional settings, Weighted l1‐Penalized Corrected Quantile Regression for High‐Dimensional Temporally Dependent Measurement Errors, A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics, A General Framework for Identifying Hierarchical Interactions and Its Application to Genomics Data, Sparse principal component analysis for high‐dimensional stationary time series, Online inference in high-dimensional generalized linear models with streaming data, Inference for high‐dimensional linear models with locally stationary error processes, Post-selection Inference of High-dimensional Logistic 
Regression Under Case–Control Design, Retire: robust expectile regression in high dimensions, Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection, Analyzing evidence-based falls prevention data with significant missing information using variable selection after multiple imputation, Decomposition of dynamical signals into jumps, oscillatory patterns, and possible outliers, Forward-selected panel data approach for program evaluation, Cross-Fitted Residual Regression for High-Dimensional Heteroscedasticity Pursuit, Online Debiasing for Adaptively Collected High-Dimensional Data With Applications to Time Series Analysis, First-order methods for convex optimization, A comparative study on high-dimensional bayesian regression with binary predictors, Quantitative robustness of instance ranking problems, A Corrected Tensor Nuclear Norm Minimization Method for Noisy Low-Rank Tensor Completion, Locally Sparse Function-on-Function Regression, A LINEAR-PROGRAMMING PORTFOLIO OPTIMIZER TO MEAN–VARIANCE OPTIMIZATION, Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression, On lower bounds for the bias-variance trade-off, Robust high-dimensional tuning free multiple testing, The Lasso with general Gaussian designs with applications to hypothesis testing, High-dimensional composite quantile regression: optimal statistical guarantees and fast algorithms, Rank-Based Greedy Model Averaging for High-Dimensional Survival Data, -Penalized Pairwise Difference Estimation for a High-Dimensional Censored Regression Model, Inference in a Class of Optimization Problems: Confidence Regions and Finite Sample Bounds on Errors in Coverage Probabilities, Nonparametric Prediction Distribution from Resolution-Wise Regression with Heterogeneous Data, Sparse and simple structure estimation via prenet penalization, Estimation and inference in sparse multivariate regression and conditional Gaussian 
graphical models under an unbalanced distributed setting, Greedy algorithms for prediction, An introduction to recent advances in high/infinite dimensional statistics, Direct shrinkage estimation of large dimensional precision matrix, Worst possible sub-directions in high-dimensional models, Optimal sampling designs for nonparametric estimation of spatial averages of random fields, Robust methods for inferring sparse network structures, On cross-validated Lasso in high dimensions, Interquantile shrinkage and variable selection in quantile regression, Significance testing in non-sparse high-dimensional linear models, On the prediction loss of the Lasso in the partially labeled setting, D-learning to estimate optimal individual treatment rules, The main contributions of robust statistics to statistical science and a new challenge, Local linear smoothing for sparse high dimensional varying coefficient models, Forward variable selection for sparse ultra-high-dimensional generalized varying coefficient models, Notes on the dimension dependence in high-dimensional central limit theorems for hyperrectangles, Regularity properties for sparse regression, Local independence feature screening for nonparametric and semiparametric models by marginal empirical likelihood, Best subset selection via a modern optimization lens, High dimensional regression for regenerative time-series: an application to road traffic modeling, Variable selection in multivariate linear models for functional data via sparse regularization, Censored linear model in high dimensions. 
Penalised linear regression on high-dimensional data with left-censored response variable, Model selection for factorial Gaussian graphical models with an application to dynamic regulatory networks, The use of vector bootstrapping to improve variable selection precision in Lasso models, The benefit of group sparsity in group inference with de-biased scaled group Lasso, Geometric inference for general high-dimensional linear inverse problems, Solution of linear ill-posed problems using overcomplete dictionaries, Oracle inequalities, variable selection and uniform inference in high-dimensional correlated random effects panel data models, Econometric estimation with high-dimensional moment equalities, A rank-corrected procedure for matrix completion with fixed basis coefficients, Fusion of hard and soft information in nonparametric density estimation, PBoostGA: pseudo-boosting genetic algorithm for variable ranking and selection, Sub-optimality of some continuous shrinkage priors, \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs, Statistical significance in high-dimensional linear models, The geometry of least squares in the 21st century, The Bernstein-Orlicz norm and deviation inequalities, Marginal empirical likelihood and sure independence feature screening, Bayesian regression based on principal components for high-dimensional data, Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization, Correlated variables in regression: clustering and sparse estimation, Discussion of ``Correlated variables in regression: clustering and sparse estimation, Bayesian linear regression with sparse priors, Functional additive regression, A new test of independence for high-dimensional data, Alignment based kernel learning with a continuous set of base kernels, Model selection and structure specification in ultra-high dimensional generalised semi-varying coefficient models, Lasso-type estimators 
for semiparametric nonlinear mixed-effects models estimation, \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors, Prediction consistency of forward iterated regression and selection technique, On the strong convergence of the optimal linear shrinkage estimator for large dimensional covariance matrix, Selection of tuning parameters in bridge regression models via Bayesian information criterion, New concentration inequalities for suprema of empirical processes, Does modeling lead to more accurate classification? A study of relative efficiency in linear classification, A new perspective on least squares under convex constraint, CAM: causal additive models, high-dimensional order search and penalized regression, \(L_1\)-penalization in functional linear regression with subgaussian design, On higher order isotropy conditions and lower bounds for sparse quadratic forms, Normalized and standard Dantzig estimators: two approaches, High-dimensional inference in misspecified linear models, Operator-valued kernel-based vector autoregressive models for network inference, On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property, Oracle inequalities for high dimensional vector autoregressions, A flexible semiparametric forecasting model for time series, Weighted \(\ell_1\)-penalized corrected quantile regression for high dimensional measurement error models, Robust inference on average treatment effects with possibly more covariates than observations, Toward a unified theory of sparse dimensionality reduction in Euclidean space, Smooth predictive model fitting in regression, Consistent learning by composite proximal thresholding, Interpreting latent variables in factor models via convex optimization, Additive model selection, Robust matrix completion, Finding a low-rank basis in a matrix subspace, Sparse recovery under weak moment assumptions, Sparse clustering of 
functional data, Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error, Sparse classification with paired covariates, Model-assisted inference for treatment effects using regularized calibrated estimation with high-dimensional data, On asymptotically optimal confidence regions and tests for high-dimensional models, Lasso Inference for High-Dimensional Time Series, Variable selection in discrete survival models including heterogeneity, Selection by partitioning the solution paths, Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness, Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements, Regularizing Double Machine Learning in Partially Linear Endogenous Models, Confidence intervals for high-dimensional inverse covariance estimation, The Hardness of Conditional Independence Testing and the Generalised Covariance Measure, Geometric median and robust estimation in Banach spaces, Estimation of high-dimensional graphical models using regularized score matching, An Automated Approach Towards Sparse Single-Equation Cointegration Modelling, A Projection Based Conditional Dependence Measure with Applications to High-dimensional Undirected Graphical Models, Main effects and interactions in mixed and incomplete data frames, Honest confidence regions and optimality in high-dimensional precision matrix estimation, Joint variable and rank selection for parsimonious estimation of high-dimensional matrices, Entropy and sampling numbers of classes of ridge functions, Sample size determination for training cancer classifiers from microarray and RNA-seq data, Sparse hierarchical regression with polynomials, Model selection in linear mixed models, Orthogonal one step greedy procedure for heteroscedastic linear models, A focused information criterion for graphical models in fMRI connectivity with high-dimensional data, Estimation of a delta-contaminated density of 
a random intensity of Poisson data