Recommendations
- Stability Results in Learning Theory
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- A survey on learning theory. I: Stability and generalization
Cited in
- Generalization Bounds for Some Ordinal Regression Algorithms
- OCReP: an optimally conditioned regularization for pseudoinversion based neural training
- On group-wise \(\ell_p\) regularization: theory and efficient algorithms
- Stability and optimization error of stochastic gradient descent for pairwise learning
- On the consistency of the empirical risk minimization principle based on algorithmic stability
- A variance reduction framework for stable feature selection
- Complementary composite minimization, small gradients in general norms, and applications
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Differentially private SGD with non-smooth losses
- Design-unbiased statistical learning in survey sampling
- Multiple spectral kernel learning and a Gaussian complexity computation
- Interpretable machine learning: fundamental principles and 10 grand challenges
- Stability and generalization of learning algorithm: a new framework of stability
- Structured sparsity and generalization
- Kernel learning at the first level of inference
- Benefit of Interpolation in Nearest Neighbor Algorithms
- Entropy-SGD: biasing gradient descent into wide valleys
- Cross-validation on extreme regions
- Generalization performance of bipartite ranking algorithms with convex losses
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- Approximation stability and boosting
- Tensor networks in machine learning
- Stable multi-label boosting for image annotation with structural feature selection
- Stable Transductive Learning
- Approximating and learning by Lipschitz kernel on the sphere
- A theoretical framework for deep transfer learning
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers
- Distribution-free consistency of empirical risk minimization and support vector regression
- A Statistical Learning Theory Approach for the Analysis of the Trade-off Between Sample Size and Precision in Truncated Ordinary Least Squares
- Multi-output learning via spectral filtering
- Good edit similarity learning by loss minimization
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- A boosting approach for supervised Mahalanobis distance metric learning
- Multiclass classification with potential function rules: margin distribution and generalization
- Free dynamics of feature learning processes
- Multi-relational graph convolutional networks: generalization guarantees and experiments
- Predictive inference with the jackknife+
- Communication-efficient distributed multi-task learning with matrix sparsity regularization
- Stability is stable: connections between replicability, privacy, and adaptive generalization
- Learning with sample dependent hypothesis spaces
- Diametrical risk minimization: theory and computations
- Deep learning: a statistical viewpoint
- Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression
- Perturbation of convex risk minimization and its application in differential private learning algorithms
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Composite kernel learning
- Stability
- Regression learning with non-identically and non-independently sampling
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Statistical performance of support vector machines
- A survey of cross-validation procedures for model selection
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Fast rates by transferring from auxiliary hypotheses
- Additive regularization trade-off: fusion of training and validation levels in kernel methods
- Data-Mining Homogeneous Subgroups in Multiple Regression When Heteroscedasticity, Multicollinearity, and Missing Variables Confound Predictor Effects
- Guaranteed Classification via Regularized Similarity Learning
- On reject and refine options in multicategory classification
- Discriminatively learned hierarchical rank pooling networks
- Uncertainty learning of rough set-based prediction under a holistic framework
- Signal recovery by stochastic optimization
- Neural ODE Control for Classification, Approximation, and Transport
- Gromov-Hausdorff stability of linkage-based hierarchical clustering methods
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning
- Bunched Fuzz: sensitivity for vector metrics
- Regularized least square regression with dependent samples
- Learning theory of distributed regression with bias corrected regularization kernel network
- How effectively train large-scale machine learning models?
- Robust pairwise learning with Huber loss
- A note on application of integral operator in learning theory
- Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems
- The big data newsvendor: practical insights from machine learning
- SOCKS: A Stochastic Optimal Control and Reachability Toolbox Using Kernel Methods
- Lotka-Volterra model with mutations and generative adversarial networks
- High-probability generalization bounds for pointwise uniformly stable algorithms
- Optimality of regularized least squares ranking with imperfect kernels
- 10.1162/153244303765208368
- Stochastic separation theorems
- A novel attribute reduction method with constraints on empirical risk and decision rule length
- Multi-split conformal prediction via Cauchy aggregation
- Wasserstein-based fairness interpretability framework for machine learning models
- Theory of Classification: a Survey of Some Recent Advances
- Application of integral operator for vector-valued regression learning
- Classifier learning with a new locality regularization method
- A note on stability of error bounds in statistical learning theory
- Domain adaptation and sample bias correction theory and algorithm for regression
- A survey of algorithms and analysis for adaptive online learning
- Discussion of big Bayes stories and BayesBag
- Generalization bounds for metric and similarity learning
- A tight upper bound on the generalization error of feedforward neural networks
- Conditional predictive inference for stable algorithms
- A survey on learning theory. I: Stability and generalization
- Measuring the Stability of Results From Supervised Statistical Learning
- On the Generalization Ability of On-Line Learning Algorithms
- Growing a list
- Indefinite kernel network with \(l^q\)-norm regularization
This page was built for publication: 10.1162/153244302760200704 (MaRDI item Q4779564)