Optimal two-stage procedures for estimating location and size of the maximum of a multivariate regression function
From MaRDI portal
Abstract: We propose a two-stage procedure for estimating the location \(\mu\) and size \(M\) of the maximum of a smooth d-variate regression function \(f\). In the first stage, a preliminary estimator of \(\mu\) obtained from a standard nonparametric smoothing method is used. At the second stage, we "zoom in" near the vicinity of the preliminary estimator and make further observations at some design points in that vicinity. We fit an appropriate polynomial regression model to estimate the location and size of the maximum. We establish that, under suitable smoothness conditions and an appropriate choice of the zooming, the second-stage estimators have better convergence rates than the corresponding first-stage estimators of \(\mu\) and \(M\). More specifically, for \(\alpha\)-smooth regression functions, the optimal nonparametric rates \(n^{-(\alpha-1)/(2\alpha+d)}\) and \(n^{-\alpha/(2\alpha+d)}\) at the first stage can be improved to \(n^{-(\alpha-1)/(2\alpha)}\) and \(n^{-1/2}\), respectively, for \(\alpha>1\). These rates are optimal in the class of all possible sequential estimators. Interestingly, the two-stage procedure resolves "the curse of dimensionality" problem to some extent, as the dimension \(d\) does not control the second-stage convergence rates, provided that the function class is sufficiently smooth. We consider a multi-stage generalization of our procedure that attains the optimal rate for any smoothness level, starting with a preliminary estimator with any power-law rate at the first stage.
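The two-stage idea described in the abstract can be sketched in a simplified one-dimensional form. This is an illustrative toy version, not the authors' procedure: the helper name `two_stage_max_estimate` is hypothetical, a moving-average smoother stands in for a proper nonparametric first-stage estimator, and the rate-optimal zooming schedule is replaced by a fixed window.

```python
import numpy as np

def two_stage_max_estimate(f_noisy, n1=400, n2=400, zoom=0.2, rng=None):
    """Toy two-stage estimate of the argmax and max of a noisy
    regression function on [0, 1].

    Stage 1: evaluate on a coarse grid and smooth with a moving average
             to obtain a preliminary argmax estimate.
    Stage 2: re-sample inside a window around that estimate and fit a
             quadratic (local polynomial regression) to refine it.
    """
    rng = np.random.default_rng(rng)

    # --- Stage 1: coarse grid + crude smoothing -------------------------
    x1 = np.linspace(0.0, 1.0, n1)
    y1 = f_noisy(x1, rng)
    kernel = np.ones(15) / 15.0               # simple moving-average smoother
    y1_smooth = np.convolve(y1, kernel, mode="same")
    mu_hat1 = x1[np.argmax(y1_smooth)]        # preliminary location estimate

    # --- Stage 2: zoom in and fit a quadratic ---------------------------
    lo, hi = max(0.0, mu_hat1 - zoom), min(1.0, mu_hat1 + zoom)
    x2 = np.linspace(lo, hi, n2)
    y2 = f_noisy(x2, rng)
    a, b, c = np.polyfit(x2, y2, deg=2)       # y ~ a*x^2 + b*x + c
    mu_hat2 = np.clip(-b / (2.0 * a), lo, hi) # vertex of the fitted parabola
    M_hat2 = np.polyval([a, b, c], mu_hat2)   # fitted size of the maximum
    return mu_hat2, M_hat2
```

On a smooth test function such as a Gaussian bump with additive noise, the second-stage quadratic fit refines both the location and the size estimates relative to the raw smoothed grid, mirroring (in spirit only) the rate improvement the paper proves.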
Recommendations
- Estimation of the location of the maximum of a regression function using extreme order statistics
- Adaptive nonparametric estimation of smooth multivariate functions
- Bayesian stochastic estimation of the maximum of a regression function
- On nonparametric estimators of location of maximum
- An Adaptive Two‐stage Estimation Method for Additive Models
Cites work
- scientific article; zbMATH DE number 991833
- scientific article; zbMATH DE number 3911487
- scientific article; zbMATH DE number 3653342
- A companion for the Kiefer-Wolfowitz-Blum stochastic approximation algorithm
- A two-stage hybrid procedure for estimating an inverse regression function
- Accelerated randomized stochastic optimization
- Adaptive estimation of the mode of a multivariate density
- Adaptive nonparametric peak estimation
- Change-point estimation under adaptive sampling
- Least squares estimators of the mode of a unimodal regression function
- Lower rate of convergence for locating a maximum of a function
- Multidimensional Stochastic Approximation Methods
- Nonparametric estimation of the location of a maximum in a response surface
- Optimal order of accuracy of search algorithms in stochastic optimization
- Stochastic Estimation of the Maximum of a Regression Function
Cited in (4)
- Maxima-finding algorithms for multidimensional samples: A two-phase approach
- Uniform convergence of local Fréchet regression with applications to locating extrema and time warping for metric space valued trajectories
- Bayesian mode and maximum estimation and accelerated rates of contraction
- Estimation and inference for minimizer and minimum of convex functions: optimality, adaptivity and uncertainty principles