Robustness and generalization
From MaRDI portal
Publication: 420915
DOI: 10.1007/s10994-011-5268-1
zbMath: 1242.68259
MaRDI QID: Q420915
Publication date: 23 May 2012
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-011-5268-1
Mathematics Subject Classification:
62J07: Ridge regression; shrinkage estimators (Lasso)
62G35: Nonparametric robustness
62H30: Classification and discrimination; cluster analysis (statistical aspects)
68T05: Learning and adaptive systems in artificial intelligence
Related Items
- Partial differential equation regularization for supervised machine learning
- On the Purity and Entropy of Mixed Gaussian States
- The role of mutual information in variational classifiers
- Mitigating robust overfitting via self-residual-calibration regularization
- The risk of trivial solutions in bipartite top ranking
- A tight upper bound on the generalization error of feedforward neural networks
- Regularisation of neural networks by enforcing Lipschitz continuity
- On the robustness of randomized classifiers to adversarial examples
- Compressive sensing and neural networks from a statistical learning perspective
- Understanding generalization error of SGD in nonconvex optimization
- Achieving adversarial robustness via sparsity
- System identification through Lipschitz regularized deep neural networks
- Learning under \(p\)-tampering poisoning attacks
- Generalization Error in Deep Learning
Cites Work
- Markov chains and stochastic stability
- Robust solutions of uncertain linear programs
- Efficient distribution-free learning of probabilistic concepts
- Hoeffding's inequality for uniformly ergodic Markov chains
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Weak convergence and empirical processes. With applications to statistics
- The generalization performance of ERM algorithm with strongly mixing observations
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Local Rademacher complexities
- Robust Convex Optimization
- Theory of Classification: a Survey of Some Recent Advances
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- The Time Change Method and SDEs with Nonnegative Drift
- The Price of Robustness
- Distribution-free performance bounds for potential function rules
- Regression Quantiles
- Distribution-free inequalities for the deleted and holdout error estimates
- Scale-sensitive dimensions, uniform convergence, and learnability
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Rademacher penalties and structural risk minimization
- Extension of the PAC framework to finite and countable Markov chains
- DOI: 10.1162/153244302760200704
- DOI: 10.1162/153244303321897690
- Robust Regression and Lasso
- Robust Statistics
- A fast dual algorithm for kernel logistic regression