Lower rate of convergence for locating a maximum of a function
Scientific article; zbMATH DE number 4080680
Recommendations
- Lower bounds on the convergence rate of the Markov symmetric random search
- The adaptive rate of convergence in a problem of pointwise density estimation
- Estimation of the location of the maximum of a regression function using extreme order statistics
Cited in (17)
- The stochastic approximation method for the estimation of a multivariate probability density
- Recursive estimators of integrated squared density derivatives
- Stochastic approximation of global minimum points
- Algorithm portfolios for noisy optimization
- Complete cubic spline estimation of non-parametric regression functions
- Adaptive recursive kernel conditional density estimators under censoring data
- Simple and cumulative regret for continuous noisy optimization
- Optimal two-stage procedures for estimating location and size of the maximum of a multivariate regression function
- The multivariate Révész's online estimator of a regression function and its averaging
- Online estimation of hazard rate under random censoring
- Online estimation of integrated squared density derivatives
- Estimation and inference for minimizer and minimum of convex functions: optimality, adaptivity and uncertainty principles
- A compact law of the iterated logarithm for online estimator of hazard rate under random censoring
- Bayesian mode and maximum estimation and accelerated rates of contraction
- Accelerated randomized stochastic optimization
- Online Statistical Inference for Stochastic Optimization via Kiefer-Wolfowitz Methods
- A companion for the Kiefer-Wolfowitz-Blum stochastic approximation algorithm