Stability and generalization
From MaRDI portal
DOI: 10.1162/153244302760200704
zbMATH Open: 1007.68083
OpenAlex: W2139338362
MaRDI QID: Q4779564
Authors: Olivier Bousquet, André Elisseeff
Publication date: 27 November 2002
Published in: Journal of Machine Learning Research
Full work available at URL: https://doi.org/10.1162/153244302760200704
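For context, the central result of this publication: uniformly stable learning algorithms generalize. A short LaTeX sketch of the statement, paraphrased from the paper's Theorem 12 rather than quoted verbatim ($m$ denotes the sample size, $M$ a bound on the loss, $\beta$ the uniform stability constant):

```latex
% Uniform stability: an algorithm A is \beta-uniformly stable if removing
% any single example i from a training set S of size m changes the loss
% at every point z by at most \beta:
%   \sup_z \, \bigl| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \bigr| \le \beta .
%
% Generalization bound (paraphrase of Theorem 12): for a loss bounded by M,
% with probability at least 1 - \delta over the draw of S,
\[
  R(A_S) \;\le\; R_{\mathrm{emp}}(A_S) \;+\; 2\beta
         \;+\; \bigl(4 m \beta + M\bigr)\sqrt{\frac{\ln(1/\delta)}{2m}} .
\]
% For Tikhonov-regularized kernel methods, \beta = O(1/(\lambda m)), so the
% bound vanishes at the usual O(1/\sqrt{m}) rate.
```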
Recommendations
- STABILITY RESULTS IN LEARNING THEORY
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- A survey on learning theory. I: Stability and generalization
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Computational learning theory (68Q32)
- Nonnumerical algorithms (68W05)
Cited In (first 100 items shown)
- A tight upper bound on the generalization error of feedforward neural networks
- Measuring the Stability of Results From Supervised Statistical Learning
- Indefinite kernel network with \(l^q\)-norm regularization
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- The role of mutual information in variational classifiers
- Data-Driven Optimization: A Reproducing Kernel Hilbert Space Approach
- Kernel selection with spectral perturbation stability of kernel matrix
- Concept drift detection and adaptation with hierarchical hypothesis testing
- A Bernstein-type inequality for functions of bounded interaction
- Bounding the difference between RankRC and RankSVM and application to multi-level rare class kernel ranking
- Stability and generalization of graph convolutional networks in eigen-domains
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings
- Generalization bounds for regularized portfolio selection with market side information
- Hausdorff dimension, heavy tails, and generalization in neural networks
- Perturbation to enhance support vector machines for classification
- Understanding generalization error of SGD in nonconvex optimization
- From undecidability of non-triviality and finiteness to undecidability of learnability
- Analysis of classifiers' robustness to adversarial perturbations
- Post-selection inference via algorithmic stability
- Large margin vs. large volume in transductive learning
- A selective overview of deep learning
- Generalization Error in Deep Learning
- Learning rates of gradient descent algorithm for classification
- Stability and optimization error of stochastic gradient descent for pairwise learning
- OCReP: an optimally conditioned regularization for pseudoinversion based neural training
- On group-wise \(\ell_p\) regularization: theory and efficient algorithms
- Multiple spectral kernel learning and a Gaussian complexity computation
- Structured sparsity and generalization
- Benefit of Interpolation in Nearest Neighbor Algorithms
- Entropy-SGD: biasing gradient descent into wide valleys
- Design-unbiased statistical learning in survey sampling
- Stability and generalization of learning algorithm: a new framework of stability
- Interpretable machine learning: fundamental principles and 10 grand challenges
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- Approximation stability and boosting
- Stable Transductive Learning
- Distribution-free consistency of empirical risk minimization and support vector regression
- Multi-relational graph convolutional networks: generalization guarantees and experiments
- Good edit similarity learning by loss minimization
- Diametrical risk minimization: theory and computations
- Deep learning: a statistical viewpoint
- Communication-efficient distributed multi-task learning with matrix sparsity regularization
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- On reject and refine options in multicategory classification
- The big data newsvendor: practical insights from machine learning
- Robust pairwise learning with Huber loss
- 10.1162/153244303765208368
- Wasserstein-based fairness interpretability framework for machine learning models
- Classifier learning with a new locality regularization method
- Domain adaptation and sample bias correction theory and algorithm for regression
- Discussion of big Bayes stories and BayesBag
- On the Generalization Ability of On-Line Learning Algorithms
- Generalization bounds for metric and similarity learning
- Spectral Algorithms for Supervised Learning
- SIRUS: stable and interpretable RUle set for classification
- Robustness of general dichotomies
- Leave-One-Out Bounds for Kernel Methods
- Stability of unstable learning algorithms
- Least-square regularized regression with non-iid sampling
- Robustness and generalization
- Transfer bounds for linear feature learning
- Efficiency of classification methods based on empirical risk minimization
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Soft margin support vector classification as buffered probability minimization
- Boosting and instability for regression trees
- Approximations and solution estimates in optimization
- Analysis of support vector machines regression
- Oracle inequalities for cross-validation type procedures
- STABILITY RESULTS IN LEARNING THEORY
- Algorithmic stability and meta-learning
- Approximation with polynomial kernels and SVM classifiers
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Stability analysis of learning algorithms for ontology similarity computation
- On learnability, complexity and stability
- The consistency of multicategory support vector machines
- Regularization techniques and suboptimal solutions to optimization problems in learning from data
- Leave-one-out cross-validation is risk consistent for Lasso
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- Training regression ensembles by sequential target correction and resampling
- Primal and dual model representations in kernel-based learning
- Multi-kernel regularized classifiers
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- Support vector machines with applications
- Coefficient regularized regression with non-iid sampling
- Generalization bounds for averaged classifiers
- Generalization bounds for learning with linear, polygonal, quadratic and conic side knowledge
- Maximization of AUC and buffered AUC in binary classification
- Stability of randomized learning algorithms
- From inexact optimization to learning via gradient concentration
- The optimal solution of multi-kernel regularization learning
- Robustness of reweighted least squares kernel based regression
This page was built for publication: Stability and generalization (MaRDI item Q4779564)