Scientific article; zbMATH DE number 893887

From MaRDI portal

zbMath: 0853.68150 · MaRDI QID: Q4881152

László Györfi, Luc P. Devroye, Gábor Lugosi

Publication date: 27 June 1996


Title: A Probabilistic Theory of Pattern Recognition



Related Items

A penalized criterion for variable selection in classification, On nonparametric classification with missing covariates, Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder), A random forest guided tour, Is the \(k\)-NN classifier in high dimensions affected by the curse of dimensionality?, On the classification problem for Poisson point processes, Moments and root-mean-square error of the Bayesian MMSE estimator of classification error in the Gaussian model, Applications of regularized least squares to pattern classification, On the estimation of the mean density of random closed sets, Stein's identity, Fisher information, and projection pursuit: A triangulation, Persistence of plug-in rule in classification of high dimensional multivariate binary data, Optimal dyadic decision trees, Guest editorial: Learning theory, Structured large margin machines: sensitive to data distributions, Minimax optimal rates of convergence for multicategory classifications, Learning parallel portfolios of algorithms, Fast learning rates in statistical inference through aggregation, Surrogate regret bounds for generalized classification performance metrics, Complex sampling designs: uniform limit theorems and applications, \(L_{p}\)-norm Sauer-Shelah lemma for margin multi-category classifiers, Nonparametric statistics of dynamic networks with distinguishable nodes, On the consistency of a new kernel rule for spatially dependent data, On visual distances for spectrum-type functional data, Probabilistic clustering via Pareto solutions and significance tests, Classification rules based on distribution functions of functional depth, Classification with the pot-pot plot, Accelerated gradient boosting, Ranking and empirical minimization of \(U\)-statistics, A scale-based approach to finding effective dimensionality in manifold learning, Rates of convergence in active learning, Recursive aggregation of 
estimators by the mirror descent algorithm with averaging, Empirical risk minimization is optimal for the convex aggregation problem, Average case recovery analysis of tomographic compressive sensing, Coarse decision making and overfitting, Data-based decision rules about the convexity of the support of a distribution, Fast learning from \(\alpha\)-mixing observations, An empirical comparison of learning algorithms for nonparametric scoring: the \textsc{TreeRank} algorithm and other methods, Optimized fixed-size kernel models for large data sets, On Poincaré cone property, Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities, Aggregation for Gaussian regression, Simultaneous adaptation to the margin and to complexity in classification, Computing strategies for achieving acceptability: a Monte Carlo approach, Optimal rates of aggregation in classification under low noise assumption, Benchmarking local classification methods, Identifying predictive hubs to condense the training set of \(k\)-nearest neighbour classifiers, New insights into approximate Bayesian computation, Hyper-rectangular space partitioning trees: practical approach, Approximation by neural networks and learning theory, A strong uniform convergence rate of kernel conditional quantile estimator under random censorship, Measures of divergence on credal sets, Standard deviation of the longest common subsequence, On the sampling distribution of resubstitution and leave-one-out error estimators for linear classifiers, Associative naïve Bayes classifier: automated linking of gene ontology to medline documents, Boosting for high-dimensional linear models, Two-group classification via a biobjective margin maximization model, Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization, Optimal filters with multiresolution apertures, Optimal convex error estimators for 
classification, Local angles and dimension estimation from data on manifolds, An investigation of new graph invariants related to the domination number of random proximity catch digraphs, Nonasymptotic bounds on the \(L_{2}\) error of neural network regression estimates, Asymptotic properties of a nonparametric regression function estimator with randomly truncated data, Randomized algorithms for the synthesis of cautious adaptive controllers, Discriminative training via minimization of risk estimates based on Parzen smoothing, A new family of proximity graphs: class cover catch digraphs, Boosted ARTMAP: modifications to fuzzy ARTMAP motivated by boosting theory, Global and local two-sample tests via regression, Learning a priori constrained weighted majority votes, Approximating and learning by Lipschitz kernel on the sphere, Consistency of random forests, Adaptive Bayesian nonparametric regression using a kernel mixture of polynomials with application to partial linear models, Neural random forests, Minimum distance histograms with universal performance guarantees, On deep learning as a remedy for the curse of dimensionality in nonparametric regression, Adaptive concepts for stochastic partial differential equations, On the interpretation of ensemble classifiers in terms of Bayes classifiers, Aggregation using input-output trade-off, Conditional quantile sequential estimation for stochastic codes, A Hoeffding's inequality for uniformly ergodic diffusion process, Discrete minimax estimation with trees, Uniform rates of the Glivenko-Cantelli convergence and their use in approximating Bayesian inferences, Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model, Mean estimation and regression under heavy-tailed distributions: A survey, Precedence-inclusion patterns and relational learning, Optimal \(L_{1}\) bandwidth selection for variable kernel density estimates, Exact performance of error estimators for discrete classifiers, 
Square root penalty: Adaption to the margin in classification and in edge estimation, Optimality of SVM: novel proofs and tighter bounds, Prototype selection for dissimilarity-based classifiers, Universal consistency of delta estimators, On the consistency properties of linear and quadratic discriminant analyses, Complexities of convex combinations and bounding the generalization error in classification, Local Rademacher complexities, Nonparametrically consistent depth-based classifiers, Minimax fast rates for discriminant analysis with errors in variables, Strong consistency of factorial \(k\)-means clustering, Unconfused ultraconservative multiclass algorithms, Shuffled graph classification: theory and connectome applications, Kuznetsov independence for interval-valued expectations and sets of probability distributions: properties and algorithms, Hoeffding's inequality for uniformly ergodic Markov chains, Estimation of the optimal design of a nonlinear parametric regression problem via Monte Carlo experiments, Nonparametric additive model with grouped Lasso and maximizing area under the ROC curve, Learning from binary labels with instance-dependent noise, A nearest neighbor estimate of the residual variance, Simulation-based classification; a model-order-reduction approach for structural health monitoring, Simpler PAC-Bayesian bounds for hostile data, Classification with incomplete functional covariates, Large width nearest prototype classification on general distance spaces, Learning without concentration for general loss functions, Depth-weighted Bayes classification, Localization of VC classes: beyond local Rademacher complexities, On minimaxity of follow the leader strategy in the stochastic setting, Randomized nonlinear projections uncover high-dimensional structure, A consistent combined classification rule, Best subset binary prediction, Universal smoothing factor selection in density estimation: theory and practice. 
(With discussion), Nonasymptotic universal smoothing factors, kernel complexity and Yatracos classes, On the value of partial information for learning from examples, Fast \(DD\)-classification of functional data, Nonlinear orthogonal series estimates for random design regression, Probabilistic representation of complexity., Stochastic optimal growth model with risk sensitive preferences, Estimation of a jump point in random design regression, Large and moderate deviations for kernel-type estimators of the mean density of Boolean models, Bootstrap model selection for possibly dependent and heterogeneous data, A simple method for combining estimates to improve the overall error rates in classification, Total error in a plug-in estimator of level sets., Results in statistical discriminant analysis: A review of the former Soviet Union literature., On learning multicategory classification with sample queries., A universal strong law of large numbers for conditional expectations via nearest neighbors, Obtaining fast error rates in nonconvex situations, On estimation of surrogate models for multivariate computer experiments, Nonparametric quantile estimation using importance sampling, Improved classification rates under refined margin conditions, Leading strategies in competitive on-line prediction, Inferring strategies from observed actions: a nonparametric, binary tree classification approach, Choice of neighbor order in nearest-neighbor classification, Gibbs posterior for variable selection in high-dimensional classification and data mining, Discriminant analysis with independently repeated multivariate measurements: an \(L^2\) approach, Strong convergence in nonparametric regression with truncated dependent data, Learning from dependent observations, Segmenting magnetic resonance images via hierarchical mixture modelling, Reducing mechanism design to algorithm design via machine learning, Pattern recognition via projection-based \(k\)NN rules, Robust estimation and 
classification for functional data via projection-based depth notions, Regression in random design and warped wavelets, Bolstered error estimation, Classifier performance as a function of distributional complexity, Analysis of the consistency of a mixed integer programming-based multi-category constrained discriminant model, Determination of the optimal number of features for quadratic discriminant analysis via the normal approximation to the discriminant distribution, Statistical inference of minimum BD estimators and classifiers for varying-dimensional models, Statistical analysis of \(k\)-nearest neighbor collaborative recommendation, Navigating random forests and related advances in algorithmic modeling, A survey of cross-validation procedures for model selection, Active learning in heteroscedastic noise, A note on some algorithms for the Gibbs posterior, Information divergence estimation based on data-dependent partitions, Complexity-penalized estimation of minimum volume sets for dependent data, Quantization and clustering with Bregman divergences, Fast rates for support vector machines using Gaussian kernels, Fast learning rates for plug-in classifiers, Tree-structured regression and the differentiation of integrals, Nonparametric and nonlinear reconstruction of surfaces from qualitative observations, On depth measures and dual statistics. 
A methodology for dealing with general data, Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path, Boosted Bayesian network classifiers, Local likelihood regression in generalized linear single-index models with applications to microarray data, Capturing incomplete information in resource allocation problems through numerical patterns, A Hoeffding inequality for Markov chains using a generalized inverse, Entropy conditions for \(L_{r}\)-convergence of empirical processes, The Hilbert kernel regression estimate., Scale-sensitive dimensions and skeleton estimates for classification, Strongly consistent model selection for densities, Bayesian nearest-neighbor analysis via record value statistics and nonhomogeneous spatial Poisson processes, Consistency of support vector machines for forecasting the evolution of an unknown ergodic dynamical system from observations with unknown noise, Learning from uniformly ergodic Markov chains, Estimation of the conditional risk in classification: the swapping method, An introduction to some statistical aspects of PAC learning theory, A \(\mathbb R\)eal generalization of discrete AdaBoost, Higher order estimation at Lebesgue points, Rademacher complexity in Neyman-Pearson classification, Universally consistent regression function estimation using hierarchical \(B\)-splines, Nonparametric estimation of piecewise smooth regression functions, Independent rule in classification of multivariate binary data, A kernel-based combined classification rule, Convergence rates for smoothing spline estimators in varying coefficient models, Optimal convergence rates for Good's nonparametric maximum likelihood density estimator, Robust nearest-neighbor methods for classifying high-dimensional data, Formal methods in pattern recognition: A review, Inequalities for uniform deviations of averages from expectations with applications to nonparametric regression, Smooth discrimination 
analysis, On the asymptotic normality of the \(L_2\)-error in partitioning regression estimation, On best approximation by ridge functions, On the learnability of rich function classes, Prediction from randomly right censored data, Bayesian predictiveness, exchangeability and sufficientness in bacterial taxonomy, Lower bounds for the rate of convergence in nonparametric pattern recognition, On the rate of convergence of error estimates for the partitioning classification rule, An almost surely optimal combined classification rule, Learning and Convergence of the Normalized Radial Basis Functions Networks, Manifold Oblique Random Forests: Towards Closing the Gap on Convolutional Deep Networks, Unnamed Item, UNIVERSAL CODING AND PREDICTION ON ERGODIC RANDOM POINTS, The anticipatory profile. An attempt to describe anticipation as process, Arbitrarily Slow Convergence of Sequences of Linear Operators: A Survey, On Statistical Properties of Sets Fulfilling Rolling-Type Conditions, Distributed spectral pairwise ranking algorithms, Supervised Classification for a Family of Gaussian Functional Models, Shannon sampling and function reconstruction from point values, On statistical classification with incomplete covariates via filtering, Unnamed Item, On the Use of Reproducing Kernel Hilbert Spaces in Functional Classification, Vapnik–Chervonenkis dimension of axis-parallel cuts, Submodular Functions: Learnability, Structure, and Optimization, Unnamed Item, Calibrating sufficiently, On nonparametric classification for weakly dependent functional processes, Real estate price estimation in French cities using geocoding and machine learning, The Modal Age of Statistics, Design‐based properties of the nearest neighbor spatial interpolator and its bootstrap mean squared error estimator, A reduced-rank approach to predicting multiple binary responses through machine learning, Consistency of the \(k\)-nearest neighbors rule for functional data, Is there a role for statistics in 
artificial intelligence?, Hoeffding's inequality for non-irreducible Markov models, \(R^{\ast}\): a robust MCMC convergence diagnostic with uncertainty using decision tree classifiers, Unnamed Item, User-friendly Introduction to PAC-Bayes Bounds, Also for \(k\)-means: more data does not imply better performance, The role of mutual information in variational classifiers, Optimal discriminant analysis in high-dimensional latent factor models, Universal regression with adversarial responses, Probabilistic prediction for binary treatment choice: with focus on personalized medicine, Model-Assisted Estimation Through Random Forests in Finite Population Sampling, Consistency of the \(k\)-nearest neighbor classifier for spatially dependent data, Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors, On Learning and Convergence of RBF Networks in Regression Estimation and Classification, From statistical to causal learning, Impact of subsampling and tree depth on random forests, Active Learning in Multi-armed Bandits, A Note on Support Vector Machines with Polynomial Kernels, Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes, Learning Theory Estimates with Observations from General Stationary Stochastic Processes, Quantum learning: asymptotically optimal classification of qubit states, Analysis to Neyman-Pearson classification with convex loss function, Wavelet‐based estimators for mixture regression, Universal consistency of the k-NN rule in metric spaces and Nagata dimension, Approximation of Limit State Surfaces in Monotonic Monte Carlo Settings, with Applications to Classification, On a discrimination problem for a class of stochastic processes with ordered first-passage times, Unnamed Item, Unnamed Item, Asymptotic Normality for Regression Function Estimate Under Truncation and α-Mixing Conditions, Active Nearest-Neighbor Learning in Metric Spaces, Finite sample properties of system 
identification of ARX models under mixing conditions, Large deviations of divergence measures on partitions, Testing the manifold hypothesis, Nearest neighbor classification in infinite dimension, Wavelet‐based estimation of a discriminant function, Unnamed Item, Unnamed Item, An approximation result for nets in functional estimation, Classifier selection from a totally bounded class of functions, Random Projection RBF Nets for Multidimensional Density Estimation, Nonparametric regression function estimation using interaction least squares splines and complexity regularization., Supervised Learning by Support Vector Machines, An iterated classification rule based on auxiliary pseudo-predictors., Regularization and statistical learning theory for data analysis., GDP nowcasting with ragged-edge data: a semi-parametric modeling, A Note on the Size of Denoising Neural Networks, Adaptive ABC model choice and geometric summary statistics for hidden Gibbs random fields, Measuring the Capacity of Sets of Functions in the Analysis of ERM, Theory of Classification: a Survey of Some Recent Advances, Finding blocks and other patterns in a random coloring of ℤ, Optimal survey schemes for stochastic gradient descent with applications to M-estimation, Theoretical analysis of cross-validation for estimating the risk of the k-Nearest Neighbor classifier, On the limits of clustering in high dimensions via cost functions, Vertex nomination via seeded graph matching, Aggregating classifiers via Rademacher–Walsh polynomials, General Error Estimates for the Longstaff–Schwartz Least-Squares Monte Carlo Algorithm, A nearest-neighbor-based ensemble classifier and its large-sample optimality, The limit distribution of the maximum probability nearest-neighbour ball, Unnamed Item, Estimating tail decay for stationary sequences via extreme values, Unnamed Item, Online regularized generalized gradient classification algorithms, Multikernel Regression with Sparsity Constraint, Convergence Rates 
and Decoupling in Linear Stochastic Approximation Algorithms, Strong consistency of a kernel-based rule for spatially dependent data, Maximum Likelihood Estimation of a Multi-Dimensional Log-Concave Density, Unnamed Item, Unnamed Item, Unnamed Item, Unnamed Item, Unnamed Item, Unnamed Item, Optimization by Gradient Boosting, Regularization: From Inverse Problems to Large-Scale Machine Learning, Analysis of k-partite ranking algorithm in area under the receiver operating characteristic curve criterion, The Tangent Classifier, COBRA: a combined regression strategy, On the asymptotics of random forests, Performance of empirical risk minimization in linear aggregation, Optimal classification and nonparametric regression for functional data, A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier, One-pass AUC optimization, Dimensionality reduction on the Cartesian product of embeddings of multiple dissimilarity matrices, Classification in general finite dimensional spaces with the \(k\)-nearest neighbor rule, Asymptotic results for multivariate estimators of the mean density of random closed sets, Classification with asymmetric label noise: consistency and maximal denoising, Pointwise universal consistency of nonparametric density estimators, Bregman superquantiles. 
Estimation methods and applications, Classification error in multiclass discrimination from Markov data, On the kernel rule for function classification, Mining evolving data streams for frequent patterns, An asymptotically optimal kernel combined classifier, A consistency result for functional SVM by spline interpolation, Estimation of a time-dependent density, Supervised classification and mathematical optimization, Dilemmas of robust analysis of economic data streams, Quantitative error estimates for a least-squares Monte Carlo algorithm for American option pricing, Cellular tree classifiers, On regularization algorithms in learning theory, Metrics for labelled Markov processes, Inverse statistical learning, Universally consistent vertex classification for latent positions graphs, Estimation of a distribution from data with small measurement errors, On the layered nearest neighbour estimate, the bagged nearest neighbour estimate and the random forest method in regression and classification, Optimal rates for plug-in estimators of density level sets, Generalized density clustering, On combinatorial testing problems, Overlaying classifiers: A practical approach to optimal scoring, Supervised classification of diffusion paths, On local times, density estimation and supervised classification from functional data, The learning rate of \(l_2\)-coefficient regularized classification with strong loss, Regularization in statistics, A partial overview of the theory of statistics with functional data, A statistical view of clustering performance through the theory of \(U\)-processes, Information dependency: strong consistency of Darbellay-Vajda partition estimators, Application of copulas to multivariate control charts, Bayesian hypothesis testing for pattern discrimination in brain decoding, Confidence bands for least squares support vector machine classifiers: a regression approach, Concentration inequalities and laws of large numbers under epistemic and regular 
irrelevance, Adaptive partitioning schemes for bipartite ranking, On the rates of convergence of simulation-based optimization algorithms for optimal stopping problems, On extensions of Hoeffding's inequality for panel data, On the convergence of Shannon differential entropy, and its connections with density and entropy estimation, Pattern recognition with ordered labels, Linear classifiers are nearly optimal when hidden variables have diverse effects, Robustness and generalization, Classification when the covariate vectors have unequal dimensions, On Dobrushin's inequality, Lower bounds for comparison based evolution strategies using VC-dimension and sign patterns, The rate of the convergence of the mean score in random sequence comparison, Nonparametric density estimation for symmetric distributions by contaminated data, Risk bounds for CART classifiers under a margin condition, Multi-output learning via spectral filtering, A tree-based regressor that adapts to intrinsic dimension, Pattern recognition based on canonical correlations in a high dimension low sample size context, Indexability, concentration, and VC theory, An affine invariant \(k\)-nearest neighbor regression estimate, Multiclass classification with potential function rules: margin distribution and generalization, Heteroscedastic linear feature extraction based on sufficiency conditions, Exact representation of the second-order moments for resubstitution and leave-one-out error estimation for linear discriminant analysis in the univariate heteroskedastic Gaussian model, Some results on classifier selection with missing covariates, Functional data clustering via piecewise constant nonparametric density estimation, Parametric or nonparametric? A parametricness index for model selection, Diffusion learning algorithms for feedforward neural networks, Manifold matching: joint optimization of fidelity and commensurability, Does modeling lead to more accurate classification? 
A study of relative efficiency in linear classification, Classification algorithms using adaptive partitioning, Pricing Bermudan options by nonparametric regression: optimal rates of convergence for lower estimates, Optimal exponential bounds on the accuracy of classification, Nonparametric estimation of a maximum of quantiles, Asymptotic analysis of estimators on multi-label data, Cox process functional learning, Asymptotics for regression models under loss of identifiability, Kernel regression estimation for incomplete data with applications, Safe autonomy under perception uncertainty using chance-constrained temporal logic, Estimation of the essential supremum of a regression function, Learning rates for multi-kernel linear programming classifiers, Construction and evaluation of classifiers for forensic document analysis, Morphological perceptrons with competitive learning: lattice-theoretical framework and constructive learning algorithm, Classification with Gaussians and convex loss. 
II: Improving error bounds by noise conditions, Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors, Density estimation by the penalized combinatorial method, Asymptotics of cross-validated risk estimation in estimator selection and performance assess\-ment, Classification with multiple independent measurements under a separate sampling scheme, An empirical study of the complexity and randomness of prediction error sequences, Imputation Scores, A nearest-neighbor based nonparametric test for viral remodeling in heterogeneous single-cell proteomic data, Integrating prior domain knowledge into discriminative learning using automatic model construction and phantom examples, PAC-Bayesian bounds for randomized empirical risk minimizers, A best linear threshold classification with scale mixture of skew normal populations, Optimal weighted nearest neighbour classifiers, Relaxing support vectors for classification, Have I seen you before? 
Principles of Bayesian predictive classification revisited, On signal representations within the Bayes decision framework, A theoretical analysis of the peaking phenomenon in classification, Classification using proximity catch digraphs, Analysis of the rate of convergence of two regression estimates defined by neural features which are easy to implement, Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins., Generalization error of combined classifiers., Divergence-type errors of smooth Barron-type density estimators., Optimal approximations made easy, A new test for randomness and its application to some cryptographic problems, How well can a regression function be estimated if the distribution of the (random) design is concentrated on a finite set?, A topological approach to inferring the intrinsic dimension of convex sensing data, Complexity regularization via localized random penalties, On the use of random forest for two-sample testing, Process consistency for AdaBoost., On the Bayes-risk consistency of regularized boosting methods., Optimal aggregation of classifiers in statistical learning., On data classification by iterative linear partitioning, Geometric linear discriminant analysis for pattern recognition, A probabilistic theory of clustering, Empirical variance minimization with applications in variance reduction and optimal control, Perturbation-based classifier, Regime switching optimal growth model with risk sensitive preferences, Complete statistical theory of learning, Optimal functional supervised classification with separation condition, Manifold regularization based on Nyström type subsampling, A simple approach to construct confidence bands for a regression function with incomplete data, Nonparametric discrimination of areal functional data, Banzhaf random forests: cooperative game theory based random forests with consistency, Relation between weight size and degree of 
over-fitting in neural network regression, Uniform approximation of Vapnik-Chervonenkis classes, On Hölder fields clustering, Variance reduction for Markov chains with application to MCMC, Local nearest neighbour classification with applications to semi-supervised learning, Density estimation with minimization of \(U\)-divergence, Robust classification via MOM minimization, Classification via local multi-resolution projections, On the empirical estimation of integral probability metrics, Classification with minimax fast rates for classes of Bayes rules with sparse representation, The false discovery rate for statistical pattern recognition, Penalized empirical risk minimization over Besov spaces, Empirical measures for incomplete data with applications, Plugin procedure in segmentation and application to hyperspectral image segmentation, Noisy independent factor analysis model for density estimation and classification, A weighted \(k\)-nearest neighbor density estimate for geometric inference, Adaptive kernel methods using the balancing principle, Classification with guaranteed probability of error, Algorithms for optimal dyadic decision trees, The true sample complexity of active learning, Bayesian instance selection for the nearest neighbor rule, Sharp instruments for classifying compliers and generalizing causal effects, Posterior concentration for Bayesian regression trees and forests, Minimax optimal rates for Mondrian trees and forests, Identifiability of nonparametric mixture models and Bayes optimal clustering, Learning with mitigating random consistency from the accuracy measure, An optimal weight semi-supervised learning machine for neural networks with time delay, Analysis of the rate of convergence of fully connected deep neural network regression estimates with smooth activation function, Lower bounds on the rate of convergence of nonparametric regression estimates, Analysis of two gradient-based algorithms for on-line regression, Concentration 
inequalities for two-sample rank processes with application to bipartite ranking, Over-parametrized deep neural networks minimizing the empirical risk do not generalize well, Optimal rates for nonparametric F-score binary classification via post-processing, Learning sets with separating kernels, Linear components of quadratic classifiers, Generalized canonical correlation analysis for classification, Speculate-correct error bounds for \(k\)-nearest neighbor classifiers, Kernel classification with missing data and the choice of smoothing parameters, A note on margin-based loss functions in classification, Prediction for discrete time series, Memory-based reduced modelling and data-based estimation of opinion spreading, Bandwidth choice for nonparametric classification, Model selection in utility-maximizing binary prediction, Efficient global maximum likelihood estimation through kernel methods, A population background for nonparametric density-based clustering, A topologically valid definition of depth for functional data, On classification with nonignorable missing data, Estimation of an improved surrogate model in uncertainty quantification by neural networks, A nearest neighbor characterization of Lebesgue points in metric measure spaces, On histogram-based regression and classification with incomplete data, Hoeffding's inequality for Markov processes via solution of Poisson's equation, Limits to classification and regression estimation from ergodic processes, Wavelet-based estimation of generalized discriminant functions, The VC-dimension of axis-parallel boxes on the torus, Universal Bayes consistency in metric spaces, On the rate of convergence of fully connected deep neural network regression estimates, Set structured global empirical risk minimizers are rate optimal in general dimensions, Fast generalization error bound of deep learning without scale invariance of activation functions, Rates of convergence in the two-island and isolation-with-migration 
models, On the rate of convergence of image classifiers based on convolutional neural networks, Stopping criteria for, and strong convergence of, stochastic gradient descent on Bottou-Curtis-Nocedal functions, On the maximal deviation of kernel regression estimators with NMAR response variables, Towards convergence rate analysis of random forests for classification, Mixing strategies for density estimation., Maximum likelihood estimation of smooth monotone and unimodal densities., Nearest neighbor classification with dependent training sequences., On weak base hypotheses and their implications for boosting regression and classification, Randomized allocation with nonparametric estimation for a multi-armed bandit problem with covariates, Models under which random forests perform badly; consequences for applications, Statistical learning control of uncertain systems: theory and algorithms., Hierarchical classifiers based on neighbourhood criteria with adaptive computational cost, Statistical learning from biased training samples, Random forest estimation of conditional distribution functions and conditional quantiles, Analysis of convolutional neural network image classifiers in a hierarchical max-pooling model with additional local pooling, Distribution-free consistency of kernel non-parametric M-estimators., On cross-validation in kernel and partitioning regression estimation.