Learnability, stability and uniform convergence
zbMATH Open: Zbl 1242.68247 · MaRDI QID: Q2896159
Authors: Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, Karthik Sridharan
Publication date: 13 July 2012
Published in: Journal of Machine Learning Research (JMLR)
Full work available at URL: http://www.jmlr.org/papers/v11/shalev-shwartz10a.html
Recommendations
- On learnability, complexity and stability
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Stability results in learning theory
- A survey on learning theory. I: Stability and generalization
- Scale-sensitive dimensions, uniform convergence, and learnability
Classifications (MSC):
- Nonparametric regression and quantile regression (62G08)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Computational learning theory (68Q32)
Cited In (46)
- Fluctuations, effective learnability and metastability in analysis
- Robust regression using biased objectives
- Efficient and reliable overlay networks for decentralized federated learning
- Test Data Reuse for the Evaluation of Continuously Evolving Classification Algorithms Using the Area under the Receiver Operating Characteristic Curve
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- Realizable learning is all you need
- Kernel selection with spectral perturbation stability of kernel matrix
- Relative utility bounds for empirically optimal portfolios
- Understanding generalization error of SGD in nonconvex optimization
- For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- On learnability, complexity and stability
- Statistical computational learning
- Learnability with respect to fixed distributions
- Algorithmic stability for adaptive data analysis
- Consistency of learning algorithms using Attouch-Wets convergence
- A selective overview of deep learning
- Optimal transport: fast probabilistic approximation with exact solvers
- Stability and Convergence of Principal Component Learning Algorithms
- Quantitative stability of barycenters in the Wasserstein space
- Sample Size Estimates for Risk-Neutral Semilinear PDE-Constrained Optimization
- On the consistency of the empirical risk minimization principle based on algorithmic stability
- Complementary composite minimization, small gradients in general norms, and applications
- Sample complexity of sample average approximation for conditional stochastic optimization
- Parsimonious online learning with kernels via sparse projections in function space
- Stability behavior for unsupervised learning
- A Statistical Learning Theory Approach for the Analysis of the Trade-off Between Sample Size and Precision in Truncated Ordinary Least Squares
- A theoretical framework for deep transfer learning
- A moment-matching approach to testable learning and a new characterization of Rademacher complexity
- Stability is stable: connections between replicability, privacy, and adaptive generalization
- Diametrical risk minimization: theory and computations
- Compressive sensing and neural networks from a statistical learning perspective
- Sample average approximations of strongly convex stochastic programs in Hilbert spaces
- Closure properties of uniform convergence of empirical means and PAC learnability under a family of probability measures
- Perturbation of convex risk minimization and its application in differential private learning algorithms
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Stability
- Toward nonlinear local reinforcement learning rules through neuroevolution
- Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems
- High-probability generalization bounds for pointwise uniformly stable algorithms