Aggregation for Gaussian regression

DOI: 10.1214/009053606000001587
zbMath: 1209.62065
arXiv: 0710.3654
OpenAlex: W3106224380
Wikidata: Q105192836
Scholia: Q105192836
MaRDI QID: Q2456016

Florentina Bunea, Alexandre B. Tsybakov, Marten H. Wegkamp

Publication date: 17 October 2007

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0710.3654



Related Items

Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Estimation in nonparametric regression model with additive and multiplicative noise via Laguerre series
Variance function estimation in regression model via aggregation procedures
Simple proof of the risk bound for denoising by exponential weights for asymmetric noise distributions
Theory of adaptive estimation
Prediction of time series by statistical learning: general losses and fast rates
Quasi-likelihood and/or robust estimation in high dimensions
High-dimensional regression with unknown variance
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
Sparse estimation by exponential weighting
Anisotropic functional deconvolution with long-memory noise: the case of a multi-parameter fractional Wiener sheet
Greedy algorithms for prediction
Combining a relaxed EM algorithm with Occam's razor for Bayesian variable selection in high-dimensional regression
A nonlinear aggregation type classifier
Best subset selection via a modern optimization lens
Performance of empirical risk minimization in linear aggregation
Some sharp performance bounds for least squares regression with \(L_1\) regularization
Near-ideal model selection by \(\ell_1\) minimization
SLOPE is adaptive to unknown sparsity and asymptotically minimax
Sparsity in penalized empirical risk minimization
Aggregation via empirical risk minimization
Deconvolution model with fractional Gaussian noise: a minimax study
AIC for the Lasso in generalized linear models
From local kernel to nonlocal multiple-model image denoising
Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators
Minimax lower bounds for the simultaneous wavelet deconvolution with fractional Gaussian noise and unknown kernels
Estimation of matrices with row sparsity
Simultaneous analysis of Lasso and Dantzig selector
Laplace deconvolution with dependent errors: a minimax study
Anisotropic de-noising in functional deconvolution model with dimension-free convergence rates
Minimax adaptive wavelet estimator for the anisotropic functional deconvolution model with unknown kernel
Blind deconvolution model in periodic setting with fractional Gaussian noise
Optimal equivariant prediction for high-dimensional linear models with arbitrary predictor covariance
Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity
On minimax convergence rates under \(L^p\)-risk for the anisotropic functional deconvolution model
Sparse recovery under matrix uncertainty
Anisotropic functional deconvolution for the irregular design: A minimax study
Lasso-type estimators for semiparametric nonlinear mixed-effects models estimation
Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model
Sharp oracle inequalities for aggregation of affine estimators
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Aggregation of estimators and stochastic optimization
On the optimality of the empirical risk minimization procedure for the convex aggregation problem
Adaptive Dantzig density estimation
\(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Autoregressive process modeling via the Lasso procedure
Empirical risk minimization is optimal for the convex aggregation problem
High-dimensional additive hazards models and the lasso
Model selection in regression under structural constraints
Laplace deconvolution with noisy observations
Optimal model selection in heteroscedastic regression using piecewise polynomial functions
Sparse regression learning by aggregation and Langevin Monte-Carlo
On the asymptotic properties of the group lasso estimator for linear models
Honest variable selection in linear and logistic regression models via \(\ell_1\) and \(\ell_1+\ell_2\) penalization
General oracle inequalities for model selection
On the conditions used to prove oracle results for the Lasso
MAP model selection in Gaussian regression
PAC-Bayesian bounds for sparse regression estimation with exponential weights
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Sparsity considerations for dependent variables
The smooth-Lasso and other \(\ell_1+\ell_2\)-penalized methods
Least squares after model selection in high-dimensional sparse models
On the optimality of the aggregate with exponential weights for low temperatures
Mirror averaging with sparsity priors
Multichannel deconvolution with long-range dependence: a minimax study
A new approach to estimator selection
Sign-constrained least squares estimation for high-dimensional regression
Model averaging by jackknife criterion in models with dependent data
Transductive versions of the Lasso and the Dantzig selector
Kullback-Leibler aggregation and misspecified generalized linear models
Estimator selection with respect to Hellinger-type risks
Generalization of constraints for high dimensional regression problems
Oracle inequalities and optimal inference under group sparsity
Optimal learning with \textit{Q}-aggregation
Simultaneous adaptation to the margin and to complexity in classification
Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood
Microlocal Analysis of the Geometric Separation Problem
Optimal rates of aggregation in classification under low noise assumption
Learning by mirror averaging
Nonparametric sequential prediction of time series
Pivotal estimation via square-root lasso in nonparametric regression
Structured, Sparse Aggregation
Exponential screening and optimal rates of sparse estimation
Estimation of high-dimensional low-rank matrices
Estimator selection in the Gaussian setting
High-dimensional Gaussian model selection on a Gaussian design
SPADES and mixture models
On the sensitivity of the Lasso to the number of predictor variables
Aggregated wavelet estimation and its application to ultra-fast fMRI
Lasso-type recovery of sparse representations for high-dimensional data
A universal procedure for aggregating estimators
Mixing least-squares estimators when the variance is unknown
Generalized mirror averaging and \(D\)-convex aggregation
Linear and convex aggregation of density estimators
Some theoretical results on the grouped variables Lasso
Robust forecast combinations
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Sharp connections between Berry-Esseen characteristics and Edgeworth expansions for stationary processes
Non-parametric Poisson regression from independent and weakly dependent observations by model selection
Anisotropic functional Laplace deconvolution
Aggregation using input-output trade-off
Sharp Oracle Inequalities for Square Root Regularization
Robust subset selection
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Structured estimation for the nonparametric Cox model
Aggregating estimates by convex optimization
Lasso and probabilistic inequalities for multivariate point processes
Adaptive estimation over anisotropic functional classes via oracle approach
Sparse high-dimensional varying coefficient model: nonasymptotic minimax study
Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates

