On the Complexity of Labeled Datasets
From MaRDI portal
Abstract: Statistical Learning Theory (SLT) provides the foundation to ensure that a supervised algorithm generalizes the mapping it selects from its search space bias. SLT relies on the Shattering coefficient function to upper bound the empirical risk minimization principle, from which one can estimate the training sample size necessary to ensure probabilistic learning convergence and, most importantly, characterize the capacity of the algorithm bias, including its underfitting and overfitting behavior on specific target problems. However, the analytical solution of the Shattering coefficient has remained an open problem since the first studies by Vapnik and Chervonenkis, a problem we address for specific datasets in this paper by employing equivalence relations from Topology, data separability results by Har-Peled and Jones, and combinatorics. Our approach computes the Shattering coefficient for both binary and multi-class datasets, leading to the following additional contributions: (i) the estimation of the required number of hyperplanes in the worst- and best-case classification scenarios and the respective computational complexities; (ii) the estimation of the training sample sizes required to ensure supervised learning; and (iii) the comparison of dataset embeddings, since they (re)organize samples into new space configurations. All results introduced and discussed throughout this paper are supported by the R package shattering (https://cran.r-project.org/web/packages/shattering).
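The role the Shattering coefficient plays in bounding sample sizes can be illustrated with classical SLT results. The sketch below is not the paper's exact combinatorial computation (which derives dataset-specific coefficients); it is a minimal Python illustration, with hypothetical function names, of the Sauer–Shelah upper bound on the Shattering coefficient and of estimating a training sample size from the standard VC generalization bound.

```python
from math import comb, log, sqrt, e

def sauer_bound(n, d):
    # Sauer-Shelah upper bound on the Shattering coefficient of a
    # hypothesis class with VC dimension d evaluated on n samples:
    # sum_{i=0}^{d} C(n, i); equals 2^n whenever n <= d.
    return sum(comb(n, i) for i in range(min(d, n) + 1))

def sample_size_for(epsilon, delta, d):
    # Smallest n (found by doubling search; coarse but fine for a sketch)
    # such that the classical VC bound on the generalization gap,
    # sqrt((8/n) * (d * ln(2*e*n/d) + ln(4/delta))), drops below epsilon.
    n = max(d, 1)
    while True:
        gap = sqrt((8.0 / n) * (d * log(2 * e * n / d) + log(4.0 / delta)))
        if gap <= epsilon:
            return n
        n *= 2

# With n <= d every labeling is realizable, so the bound saturates at 2^n;
# beyond d it grows only polynomially, which drives learning convergence.
print(sauer_bound(4, 10))   # 2^4 = 16, since 4 samples are fully shattered
print(sauer_bound(5, 1))    # C(5,0) + C(5,1) = 6
print(sample_size_for(0.1, 0.05, 3))
```

The polynomial (rather than exponential) growth of `sauer_bound` for n > d is exactly what makes the empirical risk bound vanish as n grows, which is the convergence guarantee the abstract refers to.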
This page was built for publication: On the Complexity of Labeled Datasets
MaRDI item Q122466