Some Comments on \(C_P\)


DOI: 10.2307/1267380
zbMath: 0269.62061
OpenAlex: W4246048519
MaRDI QID: Q5686827

Colin L. Mallows

Publication date: 1973

Published in: Technometrics

Full work available at URL: https://doi.org/10.2307/1267380
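
For context, the criterion discussed in this paper is commonly stated as follows (a standard textbook formulation, not quoted from this record): for a candidate submodel with \(p\) fitted coefficients, residual sum of squares \(\mathrm{RSS}_p\), and an error-variance estimate \(\hat\sigma^2\) taken from the full model on \(n\) observations,
\[
C_p \;=\; \frac{\mathrm{RSS}_p}{\hat\sigma^2} \;-\; n \;+\; 2p ,
\]
so that a submodel with negligible bias satisfies \(C_p \approx p\), and models are typically compared by how close \(C_p\) falls to \(p\).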



Related Items

Optimal model averaging estimator for multinomial logit models, Model averaging for generalized linear models in fragmentary data prediction, What are the Most Important Statistical Ideas of the Past 50 Years?, Model Selection and Regression t-Statistics, Selection of dimension and basis for density estimation and selection of dimension, basis and error distribution for regression, A variable selection proposal for multiple linear regression analysis, Model Selection of Generalized Estimating Equation With Divergent Model Size, Frequentist Model Averaging for the Nonparametric Additive Model, Learning spectral windowing parameters for regularization using unbiased predictive risk and generalized cross validation techniques for multiple data sets, Bayesian variable selection for linear regression with the κ-G priors, Forecasting vector autoregressions with mixed roots in the vicinity of unity, Bootstrap selection of ridge regularization parameter: a comparative study via a simulation study, Economic variable selection, Asymptotic Optimality of Cp-Type Criteria in High-Dimensional Multivariate Linear Regression Models, Penalized wavelet nonparametric univariate logistic regression for irregular spaced data, Sequential change point detection for high‐dimensional data using nonconvex penalized quantile regression, Adaptive and efficient estimation in the Gaussian sequence model, Bootstrapping some GLM and survival regression variable selection estimators, Unnamed Item, On the robustness of Mallows’ Cp criterion, Robust Model Averaging Method Based on LOF Algorithm, A nonparametric predictive regression model using partitioning estimators based on Taylor expansions, An Approximated Collapsed Variational Bayes Approach to Variable Selection in Linear Regression, Multifold Cross-Validation Model Averaging for Generalized Additive Partial Linear Models, Model averaging prediction by \(K\)-fold cross-validation, Criterion constrained Bayesian hierarchical models, Kernel regression for estimating regression function and its derivatives with unknown error correlations, A Generalization Gap Estimation for Overparameterized Models via the Langevin Functional Variance, Optimal Model Averaging of Mixed-Data Kernel-Weighted Spline Regressions, Tuning parameter selection for nonparametric derivative estimation in random design, Optimal model averaging for semiparametric partially linear models with measurement errors, Variable Selection for Heteroscedastic Data Through Variance Estimation, Stochastic variable selection strategies for zero-inflated models, Variable Selection for Marginal Longitudinal Generalized Linear Models, Practical Absorbing Boundary Conditions for Wave Propagation on Arbitrary Domain, Visual research on the trustability of classical variable selection methods in Cox regression, A simultaneous estimation and variable selection rule, A Prediction-Oriented Criterion for Choosing the Biasing Parameter in Liu Estimation, Adaptive smoothing methods for frequency-function estimation, An investigation of model selection criteria for neural network time series forecasting, Estimating the accuracy of (local) cross-validation via randomised GCV choices in kernel or smoothing spline regression, Variable selection in partially linear wavelet models, Convergence in probability of the Mallows and GCV wavelet and Fourier regularization methods, Prediction risk for the horseshoe regression, The General Expressions for the Moments of the Stochastic Shrinkage Parameters of the Liu Type Estimator, A 
simple and effective bandwidth selector for local polynomial quasi-likelihood regression, Statistical significance of the Netflix challenge, IMPROVED ESTIMATORS OF BREGMAN DIVERGENCE FOR MODEL SELECTION IN SMALL SAMPLES, Introduction to the special issue on sparsity and regularization methods, High-dimensional regression with unknown variance, A general theory of concave regularization for high-dimensional sparse estimation problems, Likelihood adaptively modified penalties, Variable selection in regression using maximal correlation and distance correlation, Local influence in ridge semiparametric models, Two-stage signal restoration based on a modified median filter, Cross validation of ridge regression estimator in autocorrelated linear regression models, Order Selection and Inference with Long Memory Dependent Data, Improved robust model selection methods for a Lévy nonparametric regression in continuous time, Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?, Weighing asset pricing factors: a least squares model averaging approach, Quadratic Inference Functions for Varying‐Coefficient Models with Longitudinal Data, Histogram selection in non Gaussian regression, Model selection for factor analysis: Some new criteria and performance comparisons, Multistep forecast selection for panel data, Best-subset model selection based on multitudinal assessments of likelihood improvements, Generalized Least Squares Model Averaging, Model selection and model averaging for matrix exponential spatial models, We Stand on the Shoulders of Giants—Pioneers of Statistics in Industry, Discussion of: ``The power of monitoring: how to make the most of a contaminated multivariate sample, Automatic bandwidth selection for modified m-smoother, Information criteria in classification: new divergence-based classifiers, Group regularization for zero-inflated poisson regression models with an application to insurance ratemaking, Prediction intervals for GLMs, GAMs, and some survival regression models, Ridge regression and the Lasso: how do they do as finders of significant regressors and their multipliers?, An Adaptive Estimation of Dimension Reduction Space, ADAPTIVE ESTIMATION IN A HETEROSCEDASTIC NONPARAMETRIC REGRESSION, Model selection for (auto-)regression with dependent data, Covariate-Adjusted Reference Intervals for Diagnostic Data, A regression approach to the two-dataset problem, Kernel Liu prediction approach in partially linear mixed measurement error models, Influential subsets on the variable selection, Another look at subset selection using linear least squares, Optimum shrinkage parameter selection for ridge type estimator of Tobit model, Density Deconvolution With Additive Measurement Errors Using Quadratic Programming, Adaptivity and Oracle Inequalities in Linear Statistical Inverse Problems: A (Numerical) Survey, Nonlinear Variable Selection via Deep Neural Networks, Estimation and Inference of Heterogeneous Treatment Effects using Random Forests, High leverage points and vertical outliers resistant model selection in regression, A measure of post variable selection error in multiple linear regression, and its estimation, Adaptive LASSO for linear mixed model selection via profile log-likelihood, An integrated dominance analysis and dynamic programing approach for measuring predictor importance for customer satisfaction, Consistent and robust variable selection in regression based on Wald test, Model selection in linear regression using paired bootstrap, A Predictive 
Approach for Selection of Diffusion Index Models, An Introduction to Coding Theory and the Two-Part Minimum Description Length Principle, Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression, Truncation level selection in nonparametric regression using Padé approximation, A robust sparse linear approach for contaminated data, Stochastic restricted Liu predictors in linear mixed models, A further prediction method in linear mixed models: Liu prediction, Marginal ridge conceptual predictive model selection criterion in linear mixed models, Consistent model selection procedure for general integer-valued time series, Least squares orthogonal polynomial regression estimation for irregular design, An R package for model fitting, model selection and the simulation for longitudinal data with dropout missingness, Modeling service-time distributions for queueing network simulation, Post-selection inference of generalized linear models based on the lasso and the elastic net, Information criteria in identifying regression models, Model averaging, asymptotic risk, and regressor groups, Simultaneous selection and estimation for the some zeros family of normal models, LASSO order selection for sparse autoregression: a bootstrap approach, Forecasting time series of economic processes by model averaging across data frames of various lengths, Model selection via conditional conceptual predictive statistic under ridge regression in linear mixed models, The rd class estimator in generalized linear models: applications on gamma, Poisson and binomial distributed responses, Robust feature screening procedures for single and mixed types of data, Efficient sparse portfolios based on composite quantile regression for high-dimensional index tracking, Second Order Asymptotic Risks of Smoothed Linear Estimators in Nonparametric Regression Models, On finite-sample properties of adaptive least squares regression estimates, The asymptotic average squared error for polynomial regression, A note on estimating the msep in nonlinear regression, Non-nested linear models: A conditional confidence approach, A parameter choice rule for Tikhonov regularization based on predictive risk, Selecting estimators and variables in the seemingly unrelated regression model, Fast and approximate exhaustive variable selection for generalised linear models with APES, Unnamed Item, Unnamed Item, Model selection by multiple test procedures, On the advantages of the non-concave penalized likelihood model selection method with minimum prediction errors in large-scale medical studies, Extended Bayesian model averaging for heritability in twin studies, Selection of a stroke risk model based on transcranial Doppler ultrasound velocity, A generalized quantile regression model, Sulla regolarizzazione mediante la «Generalized cross validation»: Una appliczione al caso dei polinomi ortogonali, Conditional conceptual predictive statistic for mixed model selection, An evaluation of ridge estimator in linear mixed models: an example from kidney failure data, Robust regression: an inferential method for determining which independent variables are most important, Computational efficiency in all possible regressions, Variable selection in proportional hazards cure model with time-varying covariates, application to US bank failures, Some characteristics on the selection of spline smoothing parameter, Unnamed Item, Unnamed Item, A Bayesian approach with generalized ridge 
estimation for high-dimensional regression and testing, Exact risk approaches to smoothing parameter selection, Model Selection for Generalized Estimating Equations Accommodating Dropout Missingness, Two-stage model selection procedures in partially linear regression, Additive nonparametric regression on principal components, Robust estimation for boundary correction in wavelet regression, Unnamed Item, A polynomial algorithm for best-subset selection problem, Regression Model Selection—A Residual Likelihood Approach, MODEL SELECTION FOR PENALIZED SPLINE SMOOTHING USING AKAIKE INFORMATION CRITERIA, A new class of blased estimate in linear regression, Diagnosis of Multivariate Control Chart Signal Based on Dummy Variable Regression Technique, Automatic Smoothing for Poisson Regression, The use of probabilistic models to produce optimal graphical displays of high-dimensional data sets, Regression and Contrast Estimates Based on Adaptive Regressograms Depending on Qualitative Explanatory Variables, Properties of the Sieve Bootstrap for Fractionally Integrated and Non-Invertible Processes, Order Determination in Nonlinear Time Series by Penalized Least-Squares, Variable selection for longitudinal data with high-dimensional covariates and dropouts, A study on tuning parameter selection for the high-dimensional lasso, Estimator of prediction error based on approximate message passing for penalized linear regression, ASYMPTOTICALLY EFFICIENT MODEL SELECTION FOR PANEL DATA FORECASTING, A MORE GENERAL CRITERION FOR SUBSET SELECTION IN MULTIPLE LINEAR REGRESSION, A MORE GENERAL CRITERION FOR SUBSET SELECTION IN MULTIPLE LINEAR REGRESSION, Fast Stable Direct Fitting and Smoothness Selection for Generalized Additive Models, MINIMIZING AVERAGE RISK IN REGRESSION MODELS, AVERAGING ESTIMATORS FOR REGRESSIONS WITH A POSSIBLE STRUCTURAL BREAK, Selection of regressors in econometrics: parametric and nonparametric methods selection of regressors in econometrics, Convergence rates of the generalized information criterion, Test of Significance in order selection, Nonparametric Wavelet Regression for Binary Response, Consistent covariate selection and post model selection inference in semiparametric regression., Nonconcave penalized likelihood with a diverging number of parameters., Least angle regression. 
(With discussion), Statistical properties of the method of regularization with periodic Gaussian reproducing kernel, Corrected versions of cross-validation criteria for selecting multivariate regression and growth curve models, A generalized correlated \(C_p\) criterion for derivative estimation with dependent errors, Robust model selection with covariables missing at random, Degrees of freedom for off-the-grid sparse estimation, A note on smoothing parameter selection for penalized spline smoothing, Mallows model averaging with effective model size in fragmentary data prediction, Evaluating the impact of exploratory procedures in regression prediction: A pseudosample approach, Twenty-one ML estimators for model selection, Objective Bayesian group variable selection for linear model, On improvability of model selection by model averaging, Nonlinear black-box models in system identification: Mathematical foundations, Information criteria for selecting possibly misspecified parametric models, On the choice between sample selection and two-part models, A new philosophy for model selection and performance estimation of data-based approximate mappings, Smoothing spline ANOVA for exponential families, with application to the Wisconsin epidemiological study of diabetic retinopathy. (The 1994 Neyman Memorial Lecture), Empirical risk minimization as parameter choice rule for general linear regularization methods, Computing the degrees of freedom of rank-regularized estimators and cousins, A fast and consistent variable selection method for high-dimensional multivariate linear regression with a large number of explanatory variables, Consistent model selection criteria and goodness-of-fit test for common time series models, Efficient robust nonparametric estimation in a semimartingale regression model, Non-local methods with shape-adaptive patches (NLM-SAP), Best subset selection via cross-validation criterion, Optimization of ridge parameters in multivariate generalized ridge regression by plug-in methods, Prediction error after model search, Model selection: from theory to practice, Asymptotic optimality of the nonnegative garrote estimator under heteroscedastic errors, Model averaging for varying-coefficient partially linear measurement error models, Adaptive-modal Bayesian nonparametric regression, Selecting the length of a principal curve within a Gaussian model, Model selection in regression under structural constraints, Optimal model selection in heteroscedastic regression using piecewise polynomial functions, Robust VIF regression with application to variable selection in large data sets, Simultaneous estimation of the mean and the variance in heteroscedastic Gaussian regression, Model selection by resampling penalization, Self-concordant analysis for logistic regression, PAC-Bayesian bounds for sparse regression estimation with exponential weights, Selection of model selection criteria for multivariate ridge regression, A robust and efficient estimation and variable selection method for partially linear models with large-dimensional covariates, When and when not to use optimal model averaging, A small-sample correction for the Schwarz SIC model selection criterion., A large-sample model selection criterion based on Kullback's symmetric divergence, Comparative analysis for robust penalized spline smoothing methods, The optimal selection for restricted linear models with average estimator, Bandwidth selection: Classical or plug-in?, Optimal model averaging estimator for semi-functional 
partially linear models, Subset selection in multiple linear regression in the presence of outlier and multicollinearity, An introduction to the Bayes information criterion: theoretical foundations and interpretation, Ridge parameters optimization based on minimizing model selection criterion in multivariate generalized ridge regression, From data to stochastic models, Gaussian linear model selection in a dependent context, Generalized ridge estimator and model selection criteria in multivariate linear regression, Optimal bounds for aggregation of affine estimators, Debiasing the Lasso: optimal sample size for Gaussian designs, On the degrees of freedom of mixed matrix regression, Adaptive LASSO for selecting Fourier coefficients in a functional smooth time-varying cointegrating regression: an application to the Feldstein-Horioka puzzle, Adaptive estimation of mean and volatility functions in (auto-)regressive models., A new algorithm for fixed design regression and denoising, A GIC rule for assessing data transformation in regression, Estimating the expectation of the Log-likelihood with censored data for estimator selection, Automated parameter selection for total variation minimization in image restoration, Nonlinear system identification via direct weight optimization, Model selection for the robust efficient signal processing observed with small Lévy noise, Efficient estimation of a semiparametric partially linear varying coefficient model, Shrinkage for categorical regressors, Model averaging prediction for time series models with a diverging number of parameters, Simultaneous feature selection and clustering based on square root optimization, Decision-based model selection, Mallows criterion for heteroskedastic linear regressions with many regressors, Broken adaptive ridge regression and its asymptotic properties, Multiple choice from competing regression models under multicollinearity based on standardized update, Consistent variable selection criteria in multivariate linear regression even when dimension exceeds sample size, Determining the number of canonical correlation pairs for high-dimensional vectors, On the exponentially weighted aggregate with the Laplace prior, Robust model selection in linear regression models using information complexity, Least squares model averaging based on generalized cross validation, Asymptotic comparison of (partial) cross-validation, GCV and randomized GCV in nonparametric regression, Modulation of estimators and confidence sets., The problem of regions, Second-order Stein: SURE for SURE and other applications in high-dimensional inference, Confidence sets centered at \(C_ p\)-estimators, Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals, Bootstrapping multiple linear regression after variable selection, The extended Stein procedure for simultaneous model selection and parameter estimation, On bandwidth selection problems in nonparametric trend estimation under martingale difference errors, Optimal selection of sample-size dependent common subsets of covariates for multi-task regression prediction, Broken adaptive ridge regression for right-censored survival data, Optimal model averaging for multivariate regression models, A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models, Robust Lasso and its applications in healthcare data, Information criteria bias correction for group selection, Smoothing spline ANOVA models for large data 
sets with Bernoulli observations and the randomized GACV., Asymptotically minimax regret procedures in regression model selection and the magnitude of the dimension penalty., Selection criteria for scatterplot smoothers, Adaptive estimation in autoregression or \(\beta\)-mixing regression via model selection, Oracle inequalities for inverse problems, Spline adaptation to smoothness, Law of iterated logarithm and consistent model selection criterion in logistic regression, Robust model selection in regression via weighted likelihood methodology, Some connections between Bayesian and non-Bayesian methods for regression model selection, A regularization procedure for estimating cell kinetic parameters from flow-cytometry data, Tuning parameter selection in sparse regression modeling, Some variable selection procedures in multivariate linear regression models, Shrinking toward submodels in regression, Hypercube estimators: penalized least squares, submodel selection, and numerical stability, Model selection for forecasting, Selecting the best linear regression model. A classical approach, Asymptotically optimal selection of a piecewise polynomial estimator of a regression function, Model selection criteria based on cross-validatory concordance statistics, An effective selection of regression variables when the error distribution is incorrectly specified, Local and global robustness of regression estimators, Reweighting approximate GM estimators: Asymptotics and residual-based graphics, Selecting the best regression equation via the \(P\)-value of \(F\)-test, Oracle inequalities for the stochastic differential equations, Pointwise convergence in probability of general smoothing splines, Algorithms for the optimal identification of segment neighborhoods, Bayesian variable selection with strong heredity constraints, Model averaging procedure for varying-coefficient partially linear models with missing responses, Bootstrap order determination for ARMA models: a comparison between different model selection criteria, Bayesian group bridge for bi-level variable selection, Selecting important independent variables in linear regression models, A family of the information criteria using the phi-divergence for categorical data, Moderately clipped Lasso, Bandwidth selection for kernel estimate with correlated noise, The model selection criterion AICu., Wrappers for feature subset selection, A new criterion for variable selection, Two errors in statistical model fitting, Estimator selection: a new method with applications to kernel density estimation, Wavelet regression estimation in nonparametric mixed effect models, Asymptotics of AIC, BIC, and RMSEA for model selection in structural equation modeling, Generalized ridge regression, least squares with stochastic prior information, and Bayesian estimators, European exchange trading funds trading with locally weighted support vector regression, A comparison of the information and posterior probability criteria for model selection, Prediction error criterion for selecting variables in a linear regression model, Fast state-space methods for inferring dendritic synaptic connectivity, Spatial weights matrix selection and model averaging for spatial autoregressive models, On fitting distributed lag models subject to polynomial restrictions, The negative correlations between data-determined bandwidths and the optimal bandwidth, Local linear estimation for spatial random processes with stochastic trend and stationary noise, Optimization methods for 
regularization-based ill-posed problems: a survey and a multi-objective framework, Is \(C_{p}\) an empirical Bayes method for smoothing parameter choice?, Decision rules for the choice of structural equations, A seemingly unrelated regression model in a credibility framework., Statistical inference in dynamic panel data models, Asymptotic normality and consistency of semi-nonparametric regression estimators using an upwards \(F\) test truncation rule, Mean squared errors of forecast for selecting nonnested linear models and comparison with other criteria, On the consistency of the global minimizer of Mallow's criterion for nonparametric regression, Generalized linear model selection using \(R^2\), Resampling methods for variable selection in robust regression, A semiparametric model selection criterion with applications to the marginal structural model, Stock and bond return predictability: the discrimination power of model selection criteria, Influential data cases when the \(C_p\) criterion is used for variable selection in multiple linear regression, Evaluation of matching noise for imputation techniques based on nonparametric local linear regression estimators, Tikhonovs regularization method for ill-posed problems. A comparison of different methods for the determination of the regularization parameter, An unbiased \(C_p\) criterion for multivariate ridge regression, A robust coefficient of determination for regression, Some contributions to selection and estimation in the normal linear model, Comparison of biasing parameter computational techniques in ridge-type estimation, A survey of cross-validation procedures for model selection, High-dimensional Gaussian model selection on a Gaussian design, UPRE method for total variation parameter selection, An alternate version of the conceptual predictive statistic based on a symmetrized discrepancy measure, When do stepwise algorithms meet subset selection criteria?, Multiple penalty regression: fitting and extrapolating a discrete incomplete multi-way layout, Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation, Estimating the dimension of a model, The weak convergence of likelihood ratio random fields and its applications, Mixing least-squares estimators when the variance is unknown, Evaluation and selection of models for out-of-sample prediction when the sample size is small relative to the complexity of the data-generating process, Assessing global influential observations in modified ridge regression, An algebraic characterization of the optimum of regularized kernel methods, The weighted average information criterion for order selection in time series and regression models, A simple forward selection procedure based on false discovery rate control, Gaussian model selection with an unknown variance, On the degrees of freedom in shrinkage estimation, Improved AIC selection strategy for survival analysis, A geometric interpretation of Mallows' \(C_p\) statistic and an alternative plot in variable selection, Robust model selection using fast and robust bootstrap, Bayesian predictive simultaneous variable and transformation selection in the linear model., Concentration reversals in ridge regression, Robust model selection criteria for robust Liu estimator, The composite absolute penalties family for grouped and hierarchical variable selection, The GIC for model selection: A hypothesis testing approach, A decision rule for discarding principal components in 
regression, Asymptotic bootstrap corrections of AIC for linear regression models, Semiparametric regression model selections., On model selection via stochastic complexity in robust linear regression, Consistent bandwidth selection for kernel binary regression, Autoregressive model selection for multistep prediction, Minimax estimation in linear regression under restrictions, Some properties of inferences in misspecified linear models, Manufacturing cell operating characteristics, Flexible smoothing with \(B\)-splines and penalties. With comments and a rejoinder by the authors, Normal approximation rate and bias reduction for data-driven kernel smoothing estimator in a semiparametric regression model, PRESS model selection in repeated measures data., A faster algorithm for ridge regression of reduced rank data, VAR forecasting under misspecification, Variable selection in generalized random coefficient autoregressive models, A scalable surrogate \(L_0\) sparse regression method for generalized linear models with applications to large scale data, High-dimensional consistency of rank estimation criteria in multivariate linear model, Near-ideal model selection by \(\ell _{1}\) minimization, Efficient forecast tests for conditional policy forecasts, Least-squares forecast averaging, Feature subset selection for logistic regression via mixed integer optimization, Variable selection for recurrent event data via nonconcave penalized estimating function, Automatic model selection for partially linear models, Minimizing variable selection criteria by Markov chain Monte Carlo, Model selection criteria for a linear model to solve discrete ill-posed problems on the basis of singular decomposition and random projection, Contributions to multivariate analysis by Professor Yasunori Fujikoshi, Nonparametric estimation of trend in directional data, Forecasting cointegrated nonstationary time series with time-varying variance, Adaptation over parametric families of symmetric linear estimators, A general bilinear model to describe growth or decline time profiles, Ordered smoothers with exponential weighting, On the distribution function of various model selection criteria with stochastic regressors, Regularization in statistics, Consistency of high-dimensional AIC-type and \(C_p\)-type criteria in multivariate linear regression, Semiparametric Bayesian information criterion for model selection in ultra-high dimensional additive models, Toward optimal model averaging in regression models with time series errors, Model selection and estimation in high dimensional regression models with group SCAD, Second-order asymptotic theory for calibration estimators in sampling and missing-data problems, Tuning complexity in regularized kernel-based regression and linear system identification: the robustness of the marginal likelihood estimator, Variable selection problem in the censored regression models, Non-asymptotic adaptive prediction in functional linear models, Estimation of parameters in a generalized GMANOVA model based on an outer product analogy and least squares, Statistical model selection criteria, SLOPE-adaptive variable selection via convex optimization, Statistical inference using a weighted difference-based series approach for partially linear regression models, Model selection in linear mixed effect models, A modified \(C_{p}\) statistic in a system-of-equations model, Variable selection strategies in survival models with multiple imputations, Segmentation of the mean of heteroscedastic data via 
cross-validation, Optimal model selection in density estimation, A jackknife type approach to statistical model selection, Degrees of freedom in lasso problems, Risk hull method and regularization by projections of ill-posed inverse problems, Subset selection in Poisson regression, Penalized quadratic inference functions for semiparametric varying coefficient partially linear models with longitudinal data, Lasso penalized model selection criteria for high-dimensional multivariate linear regression analysis, Parametric or nonparametric? A parametricness index for model selection, Concentration inequalities for the exponential weighting method, Kernel methods in system identification, machine learning and function estimation: a survey, Robust model selection for a semimartingale continuous time regression from discrete data, Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks, Model averaging based on James-Stein estimators, On model selection from a finite family of possibly misspecified time series models, Generalized random forests, Forecasting with factor-augmented regression: a frequentist model averaging approach, Asymptotic analysis of the squared estimation error in misspecified factor models, Regularized LIML for many instruments, Equivalence of several methods for efficient best subsets selection in generalized linear models, Selection of components in principal component analysis: A comparison of methods, A simulation of the performance of \(C_{p}\) in model selection for logistic and Poisson regression, Prediction model averaging estimator, Optimal information criteria minimizing their asymptotic mean square errors, Determining the number of factors when the number of factors can increase with sample size, Accurate distributions of Mallows' \(\operatorname{C}_p\) and its unbiased modifications with applications to shrinkage estimation, Expected predictive least squares for model selection in covariance structures, Smooth predictive model fitting in regression, Optimal weighting of a priori statistics in linear estimation theory, Degrees of freedom in low rank matrix estimation, Model selection criteria for the leads-and-lags cointegrating regression, Model selection in the presence of nonstationarity, A regularization approach to the many instruments problem, Ranked sparsity: a cogent regularization framework for selecting and estimating feature interactions and polynomials, Least squares model averaging by Mallows criterion, The projected GSURE for automatic parameter tuning in iterative shrinkage methods, Subset selection in linear regression using generalized ridge estimator, The principle of penalized empirical risk in severely ill-posed problems, An improved \(C_p\) criterion for spline smoothing, An \(R\)-square coefficient based on final prediction error, Frequentist-Bayes lack-of-fit tests based on Laplace approximations, Nearly unbiased variable selection under minimax concave penalty, A high-dimensional Wilks phenomenon, Local influence for Liu estimators in semiparametric linear models, Averaging estimators for autoregressions with a near unit root, Akaike-type criteria and the reliability of inference: model selection versus statistical model specification, Model evaluation, discrepancy function estimation, and social choice theory, Model selection criteria in multivariate models with multiple structural changes, The regression trunk approach to discover treatment covariate interaction, Estimating semiparametric 
panel data models by marginal integration, Restructuring forward step of MARS algorithm using a new knot selection procedure based on a mapping approach, Penalized projection estimators of the Aalen multiplicative intensity, Tight conditions for consistency of variable selection in the context of high dimensionality, Slope heuristics: overview and implementation, A non-iterative optimization method for smoothness in penalized spline regression, APPLE: approximate path for penalized likelihood estimators, Numerical solution of the finite moment problem in a reproducing kernel Hilbert space, A variable selection procedure for econometric models, Estimation of dynamic mixed double factors model in high-dimensional panel data, A new method to discriminate between enzyme-kinetic models, Asymptotic optimality of generalized \(C_ L\), cross-validation, and generalized cross-validation in regression with heteroskedastic errors, Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates, The focused information criterion for varying-coefficient partially linear measurement error models, ASP fits to multi-way layouts, Robust variable selection with application to quality of life research, Unnamed Item, Ridge-type regularization method for questionnaire data analysis, Latent variable selection for multidimensional item response theory models via \(L_{1}\) regularization, Order selection for same-realization predictions in autoregressive processes, Minimal penalties for Gaussian model selection, Consistent model specification tests based on \(k\)-nearest-neighbor estimation method, Concentration inequalities, counting processes and adaptive statistics, Optimum smoothing parameter selection for penalized least squares in form of linear mixed effect models, Piecewise linear solution paths with application to direct weight optimization, Variations on Ridge Traces in Regression, On improved loss estimation for shrinkage estimators, Anomalies in the Foundations of Ridge Regression, Performance of Robust GCV and Modified GCV for Spline Smoothing, Confounder selection via penalized credible regions, Hybid shrinkage estimators using penalty bases for the ordinal one-way layout, Selection of Variables in Multivariate Regression Models for Large Dimensions, Model selection: a Lagrange optimization approach, Compact discrepancy and chi-squared principles for over-determined inverse problems, Unnamed Item, A study of some ridge-type shrinkage estimators, Variable selection in functional additive regression models, Asymptotics of AIC, BIC and \(C_p\) model selection rules in high-dimensional regression, Least squares model averaging for two non-nested linear models, Adaptive density estimation of stationary \(\beta\)-mixing and \(\tau\)-mixing processes, Asymptotic optimality of a multivariate version of the generalized cross validation in adaptive smoothing splines, Fréchet means of curves for signal averaging and application to ECG data analysis, Optimized fixed-size kernel models for large data sets, Measuring the prediction error. 
A comparison of cross-validation, bootstrap and covariance penalty methods, Model selection strategies for identifying most relevant covariates in homoscedastic linear models, Robust model selection with flexible trimming, Stopping rules for iterative methods in nonnegatively constrained deconvolution, Non-intrusive low-rank separated approximation of high-dimensional stochastic models, Variable Selection in Semiparametric Linear Regression with Censored Data, An Algorithm for Enhancing Spreadsheet Regression with Out-of-Sample Statistics, Forecasting a long memory process subject to structural breaks, The adaptive L1-penalized LAD regression for partially linear single-index models, Aggregation for Gaussian regression, Robust Model Selection with LARS Based on S-estimators, Testing for Lack of Fit in Inverse Regression—with Applications to Biophotonic Imaging, Bayesian Model Selection using Test Statistics, Evaluation of generalized degrees of freedom for sparse estimation by replica method, From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation, Discussion of “From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation”, Cross-Validation, Risk Estimation, and Model Selection: Comment on a Paper by Rosset and Tibshirani, A Criterion for Optimal Predictive Model Selection, On the ``degrees of freedom of the lasso, Bias-corrected Kullback-Leibler distance criterion based model selection with covariables missing at random, Subspace Information Criterion for Model Selection, The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder)., Empirical Bayes vs. fully Bayes variable selection, Second-order bias-corrected AIC in multivariate normal linear models under non-normality, Autoregressive approximation in nonstandard situations: the fractionally integrated and non-invertible cases, Local behavior of sparse analysis regularization: applications to risk estimation, Spectral cut-off regularizations for ill-posed linear models, A randomized method for solving discrete ill-posed problems, Selecting mixed-effects models based on a generalized information criterion, Minimax regret comparison of hard and soft thresholding for estimating a bounded normal mean, Consistent linear model selection, Additive regularization trade-off: fusion of training and validation levels in kernel methods, Maxisets for model selection, ON THE SELECTION OF SUBSET AUTOREGRESSIVE TIME SERIES MODELS, Robust stepwise regression, Slack-variable models versus Scheffé's mixture models, A simulation study on classic and robust variable selection in linear regression, Adapting to unknown sparsity by controlling the false discovery rate, Adaptive minimax estimation of a fractional derivative, A plug-in bandwidth selector for nonparametric quantile regression, A discrepancy principle for the Landweber iteration based on risk minimization, Testing the order of a model, The forward search: theory and data analysis, Low Complexity Regularization of Linear Inverse Problems, An alternative quasi likelihood approach, Bayesian analysis and data-based inference for model specification, UPPER BOUNDS ON THE MINIMUM COVERAGE PROBABILITY OF CONFIDENCE INTERVALS IN REGRESSION AFTER MODEL SELECTION, Variable selection in linear regression based on ridge estimator, Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk 
estimate, Theory of Classification: a Survey of Some Recent Advances, Adaptive Posterior Mode Estimation of a Sparse Sequence for Model Selection, Variable Selection for Panel Count Data via Non-Concave Penalized Estimating Function, Density Estimation by Total Variation Penalized Likelihood Driven by the Sparsity ℓ1 Information Criterion, Akaike's Information Criterion in Generalized Estimating Equations, Model Selection in Estimating Equations, PARTIALLY LINEAR MODEL SELECTION BY THE BOOTSTRAP, Effective new methods for automated parameter selection in regularized inverse problems, Compressed and Penalized Linear Regression, Test for model selection using Cramér-von Mises distance in a fixed design regression setting, Model Selection Using Cramér–von Mises Distance, Weighted-averaging estimator for possible threshold in segmented linear regression model, A fast algorithm for optimizing ridge parameters in a generalized ridge regression by minimizing a model selection criterion, Robust model selection in 2D parametric motion estimation, Statistical estimation in the presence of possibly incorrect model assumptions, On the predictive risk in misspecified quantile regression, On Cross-Validation for Sparse Reduced Rank Regression, Cross‐validation and non‐parametric k nearest‐neighbour estimation, Variable selection and estimation in generalized linear models with the seamless ${\it L}_{{\rm 0}}$ penalty, Predictive performance of linear regression models, Generalized cross validation in variable selection with and without shrinkage, Time-varying nonlinear regression models: nonparametric estimation and model selection, Covariate Selection for Linear Errors-in-Variables Regression Models, Comparing and selecting spatial predictors using local criteria, Locally optimal adaptive smoothing splines