Recommendations
- Stability results in learning theory
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- scientific article; zbMATH DE number 6001978
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- A survey on learning theory. I: Stability and generalization
Cited in
- scientific article; zbMATH DE number 7370585
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings
- Structured sparsity and generalization
- Wasserstein-based fairness interpretability framework for machine learning models
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- The big data newsvendor: practical insights from machine learning
- Classifier learning with a new locality regularization method
- Benefit of Interpolation in Nearest Neighbor Algorithms
- Distribution-free consistency of empirical risk minimization and support vector regression
- Large margin vs. large volume in transductive learning
- A selective overview of deep learning
- Entropy-SGD: biasing gradient descent into wide valleys
- scientific article; zbMATH DE number 7370542
- Design-unbiased statistical learning in survey sampling
- Stability and optimization error of stochastic gradient descent for pairwise learning
- scientific article; zbMATH DE number 6670747
- Generalization bounds for regularized portfolio selection with market side information
- Approximation stability and boosting
- Multi-relational graph convolutional networks: generalization guarantees and experiments
- Indefinite kernel network with \(l^q\)-norm regularization
- Stability and generalization of learning algorithm: a new framework of stability
- A tight upper bound on the generalization error of feedforward neural networks
- Hausdorff dimension, heavy tails, and generalization in neural networks
- The role of mutual information in variational classifiers
- Diametrical risk minimization: theory and computations
- Good edit similarity learning by loss minimization
- scientific article; zbMATH DE number 1804116
- Perturbation to enhance support vector machines for classification
- Robust pairwise learning with Huber loss
- Post-selection inference via algorithmic stability
- scientific article; zbMATH DE number 7625184
- scientific article; zbMATH DE number 7306919
- Deep learning: a statistical viewpoint
- From undecidability of non-triviality and finiteness to undecidability of learnability
- Interpretable machine learning: fundamental principles and 10 grand challenges
- Kernel selection with spectral perturbation stability of kernel matrix
- scientific article; zbMATH DE number 7415114
- Stability and generalization of graph convolutional networks in eigen-domains
- Measuring the Stability of Results From Supervised Statistical Learning
- Multiple spectral kernel learning and a Gaussian complexity computation
- Data-Driven Optimization: A Reproducing Kernel Hilbert Space Approach
- 10.1162/153244303765208368
- Understanding generalization error of SGD in nonconvex optimization
- On reject and refine options in multicategory classification
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- Concept drift detection and adaptation with hierarchical hypothesis testing
- A Bernstein-type inequality for functions of bounded interaction
- Bounding the difference between RankRC and RankSVM and application to multi-level rare class kernel ranking
- Generalization Error in Deep Learning
- Learning rates of gradient descent algorithm for classification
- Analysis of classifiers' robustness to adversarial perturbations
- OCReP: an optimally conditioned regularization for pseudoinversion based neural training
- On group-wise \(\ell_p\) regularization: theory and efficient algorithms
- Stable Transductive Learning
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Communication-efficient distributed multi-task learning with matrix sparsity regularization
- Learning with sample dependent hypothesis spaces
- A boosting approach for supervised Mahalanobis distance metric learning
- Multiclass classification with potential function rules: margin distribution and generalization
- Spectral Algorithms for Supervised Learning
- Statistical performance of support vector machines
- Primal and dual model representations in kernel-based learning
- Domain adaptation and sample bias correction theory and algorithm for regression
- Predictive inference with the jackknife+
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- Maximization of AUC and buffered AUC in binary classification
- Least-square regularized regression with non-iid sampling
- Multi-kernel regularized classifiers
- Stability of randomized learning algorithms
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Algorithmic stability and meta-learning
- Robustness and generalization
- Approximations and solution estimates in optimization
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- A theoretical framework for deep transfer learning
- On qualitative robustness of support vector machines
- Generalization Bounds for Some Ordinal Regression Algorithms
- A survey of cross-validation procedures for model selection
- Stability
- Stability of unstable learning algorithms
- Stable multi-label boosting for image annotation with structural feature selection
- Soft margin support vector classification as buffered probability minimization
- Generalization bounds for learning with linear, polygonal, quadratic and conic side knowledge
- Multi-output learning via spectral filtering
- Learning theory of distributed regression with bias corrected regularization kernel network
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Analysis of support vector machines regression
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning
- Leave-one-out cross-validation is risk consistent for Lasso
- The consistency of multicategory support vector machines
- A survey of algorithms and analysis for adaptive online learning
- Leave-One-Out Bounds for Kernel Methods
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Discussion of big Bayes stories and BayesBag
- Generalization bounds for metric and similarity learning
- On the Generalization Ability of On-Line Learning Algorithms
- Guaranteed Classification via Regularized Similarity Learning
This page was built for publication: 10.1162/153244302760200704
MaRDI item: Q4779564