10.1162/153244302760200704
From MaRDI portal
Publication: 4779564
DOI: 10.1162/153244302760200704
zbMath: 1007.68083
OpenAlex: W2139338362
MaRDI QID: Q4779564
Stability and generalization
Olivier Bousquet, André Elisseeff
Publication date: 27 November 2002
Published in: Journal of Machine Learning Research
Full work available at URL: https://doi.org/10.1162/153244302760200704
Computational learning theory (68Q32)
Nonnumerical algorithms (68W05)
Learning and adaptive systems in artificial intelligence (68T05)
Related Items
Generalization bounds for regularized portfolio selection with market side information
Deep learning: a statistical viewpoint
Stability and generalization of graph convolutional networks in eigen-domains
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Data-Mining Homogeneous Subgroups in Multiple Regression When Heteroscedasticity, Multicollinearity, and Missing Variables Confound Predictor Effects
Leave-One-Out Bounds for Kernel Methods
Graphical Convergence of Subgradients in Nonconvex Optimization and Learning
Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression
Benefit of Interpolation in Nearest Neighbor Algorithms
Conditional predictive inference for stable algorithms
The role of optimization in some recent advances in data-driven decision-making
Average Sensitivity of Graph Algorithms
Variance reduced Shapley value estimation for trustworthy data valuation
From undecidability of non-triviality and finiteness to undecidability of learnability
Neural ODE Control for Classification, Approximation, and Transport
SOCKS: A Stochastic Optimal Control and Reachability Toolbox Using Kernel Methods
Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems
Lotka-Volterra model with mutations and generative adversarial networks
High-probability generalization bounds for pointwise uniformly stable algorithms
Optimality of regularized least squares ranking with imperfect kernels
On the sample complexity of actor-critic method for reinforcement learning with function approximation
Diametrical risk minimization: theory and computations
The role of mutual information in variational classifiers
Optimal granularity selection based on algorithm stability with application to attribute reduction in rough set theory
Tensor networks in machine learning
Post-selection inference via algorithmic stability
Multiple Spectral Kernel Learning and a Gaussian Complexity Computation
Guaranteed Classification via Regularized Similarity Learning
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
Stability and optimization error of stochastic gradient descent for pairwise learning
Generalization performance of multi-category kernel machines
SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
Optimization of the Richardson integration over fluctuations of its step sizes
On Reject and Refine Options in Multicategory Classification
A variance reduction framework for stable feature selection
The Big Data Newsvendor: Practical Insights from Machine Learning
Algorithmic Stability for Adaptive Data Analysis
Partial differential equation regularization for supervised machine learning
Entropy-SGD: biasing gradient descent into wide valleys
Hausdorff dimension, heavy tails, and generalization in neural networks
Stop Memorizing: A Data-Dependent Regularization Framework for Intrinsic Pattern Learning
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
Data-Driven Optimization: A Reproducing Kernel Hilbert Space Approach
Kernel selection with spectral perturbation stability of kernel matrix
State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings
Fast rates by transferring from auxiliary hypotheses
Coefficient regularized regression with non-iid sampling
Understanding generalization error of SGD in nonconvex optimization
Generalization bounds for averaged classifiers
Tikhonov, Ivanov and Morozov regularization for support vector machine learning
Efficiency of classification methods based on empirical risk minimization
Analysis of classifiers' robustness to adversarial perturbations
Regularized least square regression with dependent samples
Stability of unstable learning algorithms
Learning with sample dependent hypothesis spaces
The optimal solution of multi-kernel regularization learning
Stable multi-label boosting for image annotation with structural feature selection
Approximations and solution estimates in optimization
Multi-kernel regularized classifiers
Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
OCReP: an optimally conditioned regularization for pseudoinversion based neural training
On group-wise \(\ell_p\) regularization: theory and efficient algorithms
Signal recovery by stochastic optimization
Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
Stability
Differentially private SGD with non-smooth losses
Domain adaptation and sample bias correction theory and algorithm for regression
Communication-efficient distributed multi-task learning with matrix sparsity regularization
Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
Free dynamics of feature learning processes
Discriminatively learned hierarchical rank pooling networks
Regression learning with non-identically and non-independently sampling
Statistical performance of support vector machines
Spectral Algorithms for Supervised Learning
Robustness and generalization
Uncertainty learning of rough set-based prediction under a holistic framework
Good edit similarity learning by loss minimization
A theoretical framework for deep transfer learning
Primal and dual model representations in kernel-based learning
Growing a list
Oracle inequalities for cross-validation type procedures
Multi-output learning via spectral filtering
Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
Support vector machines with applications
Multiclass classification with potential function rules: margin distribution and generalization
A boosting approach for supervised Mahalanobis distance metric learning
Generalization Bounds for Some Ordinal Regression Algorithms
Transfer bounds for linear feature learning
Composite kernel learning
Training regression ensembles by sequential target correction and resampling
Perturbation to enhance support vector machines for classification.
SIRUS: stable and interpretable RUle set for classification
Indefinite kernel network with \(l^q\)-norm regularization
Generalization Error in Deep Learning
How Effectively Train Large-Scale Machine Learning Models?
Generalization performance of bipartite ranking algorithms with convex losses
Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning
Concept drift detection and adaptation with hierarchical hypothesis testing
Robust pairwise learning with Huber loss
Maximization of AUC and buffered AUC in binary classification
A Bernstein-type inequality for functions of bounded interaction
Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
Bounding the difference between RankRC and RankSVM and application to multi-level rare class kernel ranking
A tight upper bound on the generalization error of feedforward neural networks
Perturbation of convex risk minimization and its application in differential private learning algorithms
Boosting and instability for regression trees
On qualitative robustness of support vector machines
The consistency of multicategory support vector machines
Additive regularization trade-off: fusion of training and validation levels in kernel methods
A survey of cross-validation procedures for model selection
Stability analysis of learning algorithms for ontology similarity computation
Least-square regularized regression with non-iid sampling
Approximation with polynomial kernels and SVM classifiers
Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
Predictive inference with the jackknife+
Application of integral operator for vector-valued regression learning
Large margin vs. large volume in transductive learning
Leave-one-out cross-validation is risk consistent for Lasso
A selective overview of deep learning
Approximating and learning by Lipschitz kernel on the sphere
Error analysis of multicategory support vector machine classifiers
Stochastic separation theorems
Theory of Classification: a Survey of Some Recent Advances
Learning rates of gradient descent algorithm for classification
A note on application of integral operator in learning theory
Design-unbiased statistical learning in survey sampling
Measuring the Stability of Results From Supervised Statistical Learning
Analysis of support vector machines regression
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Interpretable machine learning: fundamental principles and 10 grand challenges
Distribution-free consistency of empirical risk minimization and support vector regression
Robustness of reweighted least squares kernel based regression
Kernel learning at the first level of inference
Wasserstein-based fairness interpretability framework for machine learning models
Gromov-Hausdorff stability of linkage-based hierarchical clustering methods
From inexact optimization to learning via gradient concentration
Influence measures for CART classification trees
Discussion of big Bayes stories and BayesBag
Generalization bounds for metric and similarity learning