Rates of convergence of nearest neighbor estimation under arbitrary sampling
From MaRDI portal
Publication:4857503
DOI: 10.1109/18.391248 · zbMath: 0839.93070 · OpenAlex: W2134876488 · MaRDI QID: Q4857503
Sanjeev R. Kulkarni, Steven Eli Posner
Publication date: 29 November 1995
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/18.391248
metric entropy; random sampling; deterministic sampling; nearest neighbor estimation; nonparametric regression estimates
Estimation and detection in stochastic control theory (93E10) Sampled-data control/observation systems (93C57)
Related Items
On the kernel rule for function classification ⋮ Optimal functional supervised classification with separation condition ⋮ On nonparametric classification for weakly dependent functional processes ⋮ Intrinsic Dimension Adaptive Partitioning for Kernel Methods ⋮ Supervised classification of diffusion paths ⋮ Nonparametric discrimination of areal functional data ⋮ Local nearest neighbour classification with applications to semi-supervised learning ⋮ Consistency and convergence rate for nearest subspace classifier ⋮ A tree-based regressor that adapts to intrinsic dimension ⋮ Rates of convergence for the \(k\)-nearest neighbor estimators with smoother regression functions ⋮ Bounds on the mean power-weighted nearest neighbour distance ⋮ Choice of neighbor order in nearest-neighbor classification ⋮ Active Nearest-Neighbor Learning in Metric Spaces ⋮ Rates of convergence for partitioning and nearest neighbor regression estimates with unbounded data ⋮ Residual variance estimation using a nearest neighbor statistic ⋮ Unnamed Item ⋮ Regression Estimation from an Individual Stable Sequence ⋮ Optimal global rates of convergence for nonparametric regression with unbounded data ⋮ Kernel regression estimation in a Banach space ⋮ Theoretical analysis of cross-validation for estimating the risk of the k-Nearest Neighbor classifier ⋮ Strongly consistent nonparametric forecasting and regression for stationary ergodic sequences. ⋮ Universal Bayes consistency in metric spaces ⋮ The complexity of model classes, and smoothing noisy data ⋮ Adaptive learning rates for support vector machines working on data with low intrinsic dimension ⋮ Marginal singularity and the benefits of labels in covariate-shift ⋮ Adaptive transfer learning ⋮ Robust nearest-neighbor methods for classifying high-dimensional data ⋮ Design adaptive nearest neighbor regression estimation ⋮ Nearest neighbor classification with dependent training sequences. 
⋮ Unnamed Item ⋮ Minimax-optimal nonparametric regression in high dimensions ⋮ Kernel regression estimation when the regressor takes values in metric space