On the Complexity of Labeled Datasets

Abstract: Statistical Learning Theory (SLT) provides the foundation to ensure that a supervised algorithm generalizes a mapping $f: \mathcal{X} \to \mathcal{Y}$, given that $f$ is selected from its search space bias $\mathcal{F}$. SLT relies on the Shattering coefficient function $\mathcal{N}(\mathcal{F}, n)$ to upper bound the empirical risk minimization principle, from which one can estimate the training sample size necessary to ensure probabilistic learning convergence and, most importantly, characterize the capacity of $\mathcal{F}$, including its underfitting and overfitting behavior on specific target problems. However, an analytical solution for the Shattering coefficient has remained an open problem since the first studies by Vapnik and Chervonenkis in 1962. In this paper, we address it for specific datasets by employing equivalence relations from Topology, the data separability results of Har-Peled and Jones, and combinatorics. Our approach computes the Shattering coefficient for both binary and multi-class datasets, leading to the following additional contributions: (i) the estimation of the number of hyperplanes required in the worst- and best-case classification scenarios, together with the respective $\Omega$ and $O$ complexities; (ii) the estimation of the training sample sizes required to ensure supervised learning; and (iii) the comparison of dataset embeddings, since they (re)organize samples into a new space configuration. All results introduced and discussed throughout this paper are supported by the R package shattering (https://cran.r-project.org/web/packages/shattering).
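
For reference, one standard way the Shattering coefficient upper bounds the empirical risk minimization principle is through a Vapnik–Chervonenkis-style generalization bound (a sketch with generic constants; the exact formulation used in the paper may differ):
$$P\left( \sup_{f \in \mathcal{F}} \left| R(f) - R_{\mathrm{emp}}(f) \right| > \epsilon \right) \le 2\,\mathcal{N}(\mathcal{F}, 2n)\, e^{-n\epsilon^{2}/4},$$
where $R(f)$ and $R_{\mathrm{emp}}(f)$ denote the expected and empirical risks. Setting the right-hand side to a confidence level $\delta$ and rearranging gives the sample size required for deviation $\epsilon$:
$$n \ge \frac{4}{\epsilon^{2}} \left( \ln \mathcal{N}(\mathcal{F}, 2n) + \ln \frac{2}{\delta} \right),$$
an inequality that is implicit in $n$ (since $\mathcal{N}$ also depends on it) and is typically solved numerically once an explicit Shattering coefficient is available for a dataset.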




