Structural risk minimization over data-dependent hierarchies
Publication: 4701167
DOI: 10.1109/18.705570
zbMath: 0935.68090
OpenAlex: W2106491486
MaRDI QID: Q4701167
Peter L. Bartlett, Robert C. Williamson, John Shawe-Taylor, Martin Anthony
Publication date: 21 November 1999
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://semanticscholar.org/paper/1f5a3dc5867218b86ab29cbf0046f2a02ee6ded5
Related Items
Complexity regularization via localized random penalties
Generalization bounds for averaged classifiers
Tikhonov, Ivanov and Morozov regularization for support vector machine learning
On data classification by iterative linear partitioning
Large width nearest prototype classification on general distance spaces
Kernels as features: on kernels, margins, and low-dimensional mappings
Recurrent Neural Networks with Small Weights Implement Definite Memory Machines
Complexity of hyperconcepts
Multi-kernel regularized classifiers
Ten More Years of Error Rate Research
Classification based on prototypes with spheres of influence
Why does deep and cheap learning work so well?
On the generalization error of fixed combinations of classifiers
Tests and classification methods in adaptive designs with applications
Robust cutpoints in the logical analysis of numerical data
Hybrid evolutionary algorithms in a SVR traffic flow forecasting model
On biased random walks, corrupted intervals, and learning under adversarial design
Estimation of convergence rate for multi-regression learning algorithm
Learning bounds via sample width for classifiers on finite metric spaces
The true sample complexity of active learning
The maximum vector-angular margin classifier and its fast training on large datasets using a core vector machine
An improved analysis of the Rademacher data-dependent bound using its self bounding property
On learning multicategory classification with sample queries.
A hybrid classifier based on boxes and nearest neighbors
Data Dependent Priors in PAC-Bayes Bounds
Derivative reproducing properties for kernel methods in learning theory
Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
A sharp concentration inequality with applications
Active Nearest-Neighbor Learning in Metric Spaces
Aspects of discrete mathematics and probability in the theory of machine learning
Approximation with polynomial kernels and SVM classifiers
The theoretical analysis of FDA and applications
PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification
A local Vapnik-Chervonenkis complexity
Learning big (image) data via coresets for dictionaries
Adaptive metric dimensionality reduction
Multi-category classifiers and sample width
Comment
Making Vapnik–Chervonenkis Bounds Accurate
Theory of Classification: a Survey of Some Recent Advances
A theory of learning with similarity functions
A permutation approach to validation
Support Vector Machines for Dyadic Data
Distribution-free consistency of empirical risk minimization and support vector regression
Optimality of SVM: novel proofs and tighter bounds
Critical properties of the SAT/UNSAT transitions in the classification problem of structured data
PAC-Bayesian inequalities of some random variables sequences