Universal sieve-based strategies for efficient estimation using machine learning tools (Q1983607)

From MaRDI portal
Property / reviewed by: Tiago M. Magalhães
Property / MaRDI profile type: MaRDI publication profile
Property / arXiv ID: 2003.01856
Property / cites work: Q4352611
Property / cites work: Nonparametric estimators which can be ``plugged-in''.
Property / cites work: Double/debiased machine learning for treatment and structural parameters
Property / cites work: Greedy function approximation: A gradient boosting machine.
Property / cites work: Stochastic gradient boosting.
Property / cites work: Inefficient estimators of the bivariate survival function for three models
Property / cites work: The bootstrap and Edgeworth expansion
Property / cites work: Investigating Smooth Multiple Regression by the Method of Average Derivatives
Property / cites work: Q3634482
Property / cites work: Convergence rates and asymptotic normality for series estimators
Property / cites work: Twicing Kernels and a Small Bias Property of Semiparametric Estimators
Property / cites work: Multidimensional Variation for Quasi-Monte Carlo
Property / cites work: Contributions to a general asymptotic statistical theory. With the assistance of W. Wefelmeyer
Property / cites work: On methods of sieves and penalization
Property / cites work: Q4864293
Property / cites work: Unified methods for censored longitudinal data and causality
Property / cites work: Targeted learning in data science. Causal inference for complex longitudinal studies
Property / cites work: Weak convergence and empirical processes. With applications to statistics
Property / cites work: Nonparametric variable importance assessment using machine learning techniques
Property / cites work: Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores
Property / cites work: Nearly unbiased variable selection under minimax concave penalty

Latest revision as of 15:01, 26 July 2024

Language: English
Label: Universal sieve-based strategies for efficient estimation using machine learning tools
Description: scientific article

    Statements

    Universal sieve-based strategies for efficient estimation using machine learning tools (English)
    10 September 2021
    Regression models attempt to explain the behavior of a variable of interest (\(Y\)) in terms of explanatory variables (\(X\)). Given a random sample, the regression equation can be written as \(y_i = g(x_i) + e_i\), with \(E(e_i \mid x_i) = 0\), \(i = 1, \ldots, n\). The regression model is nonparametric when \(g(x)\) cannot be summarized by a finite set of parameters. In this context, sieve regression estimators, such as polynomial or spline series, can be used to estimate \(g(x)\). Unfortunately, some properties, such as asymptotic efficiency, cannot in general be achieved with standard sieve regression estimators. In this work, Qiu, Luedtke and Carone propose two approaches: estimating the unknown function with the Highly Adaptive Lasso, and using a data-adaptive series based on an initial maximum likelihood fit. Both approaches overcome difficulties of classical sieve theory without loss of performance relative to existing methods.
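    To illustrate the sieve idea mentioned above, the following is a minimal sketch (not the authors' method) of a polynomial series estimator of \(g\): the unknown function is approximated in the finite-dimensional space spanned by \(1, x, \ldots, x^k\), and the coefficients are obtained by least squares. The function name and test setup are illustrative, not from the paper.

```python
import numpy as np

def sieve_poly_fit(x, y, degree):
    """Polynomial sieve (series) estimator of g fitted by least squares.

    The sieve space is spanned by the monomials 1, x, ..., x^degree;
    in sieve theory, `degree` grows with the sample size so that the
    approximating spaces become dense in the function class of interest.
    """
    # Design matrix with columns x^0, x^1, ..., x^degree
    X = np.vander(x, degree + 1, increasing=True)
    # Least-squares coefficients of the series approximation of g
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Return the fitted function g_hat
    return lambda t: np.vander(np.atleast_1d(t), degree + 1, increasing=True) @ coef

# Illustrative use: data generated from the smooth function g(x) = sin(2x)
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = np.sin(2.0 * x) + 0.1 * rng.normal(size=500)
g_hat = sieve_poly_fit(x, y, degree=5)
```

A degree-5 polynomial already approximates this smooth target well on \([-1, 1]\); the bias-variance trade-off in choosing the sieve dimension is precisely what makes plug-in efficiency delicate, as the review notes.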
    nonparametric inference
    asymptotic efficiency
    sieve estimation