Universal sieve-based strategies for efficient estimation using machine learning tools
From MaRDI portal
Abstract: Suppose that we wish to estimate a finite-dimensional summary of one or more function-valued features of an underlying data-generating mechanism under a nonparametric model. One approach to estimation is by plugging in flexible estimates of these features. Unfortunately, in general, such estimators may not be asymptotically efficient, which often makes these estimators difficult to use as a basis for inference. Though there are several existing methods to construct asymptotically efficient plug-in estimators, each such method either can only be derived using knowledge of efficiency theory or is only valid under stringent smoothness assumptions. Among existing methods, sieve estimators stand out as particularly convenient because efficiency theory is not required in their construction, their tuning parameters can be selected data adaptively, and they are universal in the sense that the same fits lead to efficient plug-in estimators for a rich class of estimands. Inspired by these desirable properties, we propose two novel universal approaches for estimating function-valued features that can be analyzed using sieve estimation theory. Compared to traditional sieve estimators, these approaches are valid under more general conditions on the smoothness of the function-valued features by utilizing flexible estimates that can be obtained, for example, using machine learning.
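To make the plug-in idea in the abstract concrete, here is a minimal, self-contained sketch (not the paper's proposed estimators): a finite-dimensional summary, here the illustrative functional psi = E[mu(X)^2] with mu(x) = E[Y | X = x], is estimated by fitting mu with a polynomial sieve and plugging the fit into the functional. The function name `sieve_plugin_estimate`, the polynomial basis, and the choice of functional are all assumptions made for illustration.

```python
import numpy as np

def sieve_plugin_estimate(x, y, degree=3):
    """Plug-in estimate of psi = E[mu(X)^2], where mu(x) = E[Y | X = x].

    The conditional mean mu is fit with a polynomial sieve of the given
    degree (an illustrative choice of sieve), and the fitted values are
    plugged into the functional.
    """
    # Polynomial design matrix: columns x^degree, ..., x, 1.
    basis = np.vander(x, degree + 1)
    # Least-squares sieve fit of mu.
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    mu_hat = basis @ coef
    # Plug the fitted mu into the functional: empirical mean of mu_hat^2.
    return np.mean(mu_hat ** 2)

# Simulated example: X uniform on [-1, 1], Y = sin(pi X) + noise,
# so the target is E[sin(pi X)^2] = 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 2000)
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=2000)
print(sieve_plugin_estimate(x, y))  # close to 0.5
```

As the abstract notes, naive plug-ins of this kind are not asymptotically efficient in general; the paper's contribution is to construct universal modifications of the fitted features so that the same fits yield efficient plug-in estimators across a rich class of estimands.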
Cites work
- Scientific article (zbMATH DE number 1057566; title unavailable)
- Scientific article (zbMATH DE number 845714; title unavailable)
- A wavelet tour of signal processing: the sparse way
- Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores
- Contributions to a general asymptotic statistical theory. With the assistance of W. Wefelmeyer
- Convergence rates and asymptotic normality for series estimators
- Double/debiased machine learning for treatment and structural parameters
- Greedy function approximation: a gradient boosting machine
- Inefficient estimators of the bivariate survival function for three models
- Investigating Smooth Multiple Regression by the Method of Average Derivatives
- Multidimensional variation for quasi-Monte Carlo
- Nearly unbiased variable selection under minimax concave penalty
- Nonparametric estimators which can be "plugged-in"
- Nonparametric variable importance assessment using machine learning techniques
- On methods of sieves and penalization
- Stochastic gradient boosting
- Targeted learning in data science. Causal inference for complex longitudinal studies
- The bootstrap and Edgeworth expansion
- Twicing Kernels and a Small Bias Property of Semiparametric Estimators
- Unified methods for censored longitudinal data and causality
- Weak convergence and empirical processes. With applications to statistics
Cited in (2)