Recommendations
- STABILITY RESULTS IN LEARNING THEORY
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- scientific article; zbMATH DE number 6001978
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- A survey on learning theory. I: Stability and generalization
Cited in (first 100 citing items shown)
- Learning with sample dependent hypothesis spaces
- A boosting approach for supervised Mahalanobis distance metric learning
- Multiclass classification with potential function rules: margin distribution and generalization
- Spectral Algorithms for Supervised Learning
- Statistical performance of support vector machines
- Primal and dual model representations in kernel-based learning
- Domain adaptation and sample bias correction theory and algorithm for regression
- Predictive inference with the jackknife+
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- Maximization of AUC and buffered AUC in binary classification
- Least-square regularized regression with non-iid sampling
- Multi-kernel regularized classifiers
- Stability of randomized learning algorithms
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Algorithmic stability and meta-learning
- Robustness and generalization
- Approximations and solution estimates in optimization
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- A theoretical framework for deep transfer learning
- On qualitative robustness of support vector machines
- Generalization Bounds for Some Ordinal Regression Algorithms
- A survey of cross-validation procedures for model selection
- Stability
- Stability of unstable learning algorithms
- Stable multi-label boosting for image annotation with structural feature selection
- Soft margin support vector classification as buffered probability minimization
- Generalization bounds for learning with linear, polygonal, quadratic and conic side knowledge
- Multi-output learning via spectral filtering
- Learning theory of distributed regression with bias corrected regularization kernel network
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Analysis of support vector machines regression
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning
- Leave-one-out cross-validation is risk consistent for Lasso
- The consistency of multicategory support vector machines
- A survey of algorithms and analysis for adaptive online learning
- Leave-One-Out Bounds for Kernel Methods
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Discussion of big Bayes stories and BayesBag
- Generalization bounds for metric and similarity learning
- On the Generalization Ability of On-Line Learning Algorithms
- Guaranteed Classification via Regularized Similarity Learning
- Approximation with polynomial kernels and SVM classifiers
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Additive regularization trade-off: fusion of training and validation levels in kernel methods
- Stability analysis of learning algorithms for ontology similarity computation
- Efficiency of classification methods based on empirical risk minimization
- Transfer bounds for linear feature learning
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- From inexact optimization to learning via gradient concentration
- Oracle inequalities for cross-validation type procedures
- Support vector machines with applications
- SIRUS: stable and interpretable RUle set for classification
- The optimal solution of multi-kernel regularization learning
- Coefficient regularized regression with non-iid sampling
- On learnability, complexity and stability
- Perturbation of convex risk minimization and its application in differential private learning algorithms
- A note on application of integral operator in learning theory
- Regularized least square regression with dependent samples
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- Generalization performance of bipartite ranking algorithms with convex losses
- Regularization techniques and suboptimal solutions to optimization problems in learning from data
- Conditional predictive inference for stable algorithms
- Training regression ensembles by sequential target correction and resampling
- Regression learning with non-identically and non-independently sampling
- scientific article; zbMATH DE number 33716 (no title available)
- Robustness of reweighted least squares kernel based regression
- STABILITY RESULTS IN LEARNING THEORY
- Composite kernel learning
- Theory of Classification: a Survey of Some Recent Advances
- Generalization bounds for averaged classifiers
- Signal recovery by stochastic optimization
- Gromov-Hausdorff stability of linkage-based hierarchical clustering methods
- Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression
- Robustness of general dichotomies
- Boosting and instability for regression trees
- scientific article; zbMATH DE number 7370585 (no title available)
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings
- Structured sparsity and generalization
- Wasserstein-based fairness interpretability framework for machine learning models
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- The big data newsvendor: practical insights from machine learning
- Classifier learning with a new locality regularization method
- Benefit of Interpolation in Nearest Neighbor Algorithms
- Distribution-free consistency of empirical risk minimization and support vector regression
- Large margin vs. large volume in transductive learning
- A selective overview of deep learning
- Entropy-SGD: biasing gradient descent into wide valleys
- scientific article; zbMATH DE number 7370542 (no title available)
- Design-unbiased statistical learning in survey sampling
- Stability and optimization error of stochastic gradient descent for pairwise learning
- scientific article; zbMATH DE number 6670747 (no title available)
- Generalization bounds for regularized portfolio selection with market side information
- Approximation stability and boosting
- Multi-relational graph convolutional networks: generalization guarantees and experiments
- Indefinite kernel network with \(l^q\)-norm regularization
- Stability and generalization of learning algorithm: a new framework of stability
- A tight upper bound on the generalization error of feedforward neural networks
This page was built for publication: 10.1162/153244302760200704