Stability and Generalization (DOI 10.1162/153244302760200704)
From MaRDI portal
Publication:4779564
DOI: 10.1162/153244302760200704 · zbMath: 1007.68083 · OpenAlex: W2139338362 · MaRDI QID: Q4779564
Olivier Bousquet, André Elisseeff
Publication date: 27 November 2002
Published in: CrossRef Listing of Deleted DOIs
Full work available at URL: https://doi.org/10.1162/153244302760200704
Computational learning theory (68Q32); Nonnumerical algorithms (68W05); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (only showing first 100 items)
Generalization bounds for regularized portfolio selection with market side information ⋮ Deep learning: a statistical viewpoint ⋮ Stability and generalization of graph convolutional networks in eigen-domains ⋮ On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent ⋮ Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent ⋮ Data-Mining Homogeneous Subgroups in Multiple Regression When Heteroscedasticity, Multicollinearity, and Missing Variables Confound Predictor Effects ⋮ Leave-One-Out Bounds for Kernel Methods ⋮ Graphical Convergence of Subgradients in Nonconvex Optimization and Learning ⋮ Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression ⋮ Benefit of Interpolation in Nearest Neighbor Algorithms ⋮ Conditional predictive inference for stable algorithms ⋮ The role of optimization in some recent advances in data-driven decision-making ⋮ Average Sensitivity of Graph Algorithms ⋮ Variance reduced Shapley value estimation for trustworthy data valuation ⋮ From undecidability of non-triviality and finiteness to undecidability of learnability ⋮ Neural ODE Control for Classification, Approximation, and Transport ⋮ SOCKS: A Stochastic Optimal Control and Reachability Toolbox Using Kernel Methods ⋮ Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems ⋮ Lotka-Volterra model with mutations and generative adversarial networks ⋮ High-probability generalization bounds for pointwise uniformly stable algorithms ⋮ Optimality of regularized least squares ranking with imperfect kernels ⋮ On the sample complexity of actor-critic method for reinforcement learning with function approximation ⋮ Diametrical risk minimization: theory and computations ⋮ The role of mutual information in variational classifiers ⋮
Optimal granularity selection based on algorithm stability with application to attribute reduction in rough set theory ⋮ Tensor networks in machine learning ⋮ Post-selection inference via algorithmic stability ⋮ Multiple Spectral Kernel Learning and a Gaussian Complexity Computation ⋮ Guaranteed Classification via Regularized Similarity Learning ⋮ Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery ⋮ Stability and optimization error of stochastic gradient descent for pairwise learning ⋮ Generalization performance of multi-category kernel machines ⋮ SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming ⋮ A Statistical Learning Theory Approach for the Analysis of the Trade-off Between Sample Size and Precision in Truncated Ordinary Least Squares ⋮ Multi-relational graph convolutional networks: generalization guarantees and experiments ⋮ Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers ⋮ Stability is stable: connections between replicability, privacy, and adaptive generalization ⋮ Optimization of the richardson integration over fluctuations of its step sizes ⋮ On Reject and Refine Options in Multicategory Classification ⋮ A variance reduction framework for stable feature selection ⋮ The Big Data Newsvendor: Practical Insights from Machine Learning ⋮ Algorithmic Stability for Adaptive Data Analysis ⋮ Partial differential equation regularization for supervised machine learning ⋮ Entropy-SGD: biasing gradient descent into wide valleys ⋮ Hausdorff dimension, heavy tails, and generalization in neural networks ⋮ Stop Memorizing: A Data-Dependent Regularization Framework for Intrinsic Pattern Learning ⋮ For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability ⋮ Data-Driven Optimization: A Reproducing Kernel Hilbert Space Approach ⋮ Kernel selection with spectral perturbation stability of kernel matrix ⋮ State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings ⋮ Fast rates by transferring from auxiliary hypotheses ⋮ Coefficient regularized regression with non-iid sampling ⋮ Understanding generalization error of SGD in nonconvex optimization ⋮ Generalization bounds for averaged classifiers ⋮ Tikhonov, Ivanov and Morozov regularization for support vector machine learning ⋮ Efficiency of classification methods based on empirical risk minimization ⋮ Analysis of classifiers' robustness to adversarial perturbations ⋮ Regularized least square regression with dependent samples ⋮ Stability of unstable learning algorithms ⋮ Learning with sample dependent hypothesis spaces ⋮ The optimal solution of multi-kernel regularization learning ⋮ Stable multi-label boosting for image annotation with structural feature selection ⋮ Approximations and solution estimates in optimization ⋮ Multi-kernel regularized classifiers ⋮ Local Rademacher complexity: sharper risk bounds with and without unlabeled samples ⋮ OCReP: an optimally conditioned regularization for pseudoinversion based neural training ⋮ On group-wise \(\ell_p\) regularization: theory and efficient algorithms ⋮ Signal recovery by stochastic optimization ⋮ Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels ⋮ Stability ⋮ Differentially private SGD with non-smooth losses ⋮ Domain adaptation and sample bias correction theory and algorithm for regression ⋮ Communication-efficient distributed multi-task learning with matrix sparsity regularization ⋮ Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution ⋮ Free dynamics of feature learning processes ⋮ Discriminatively learned hierarchical rank pooling networks ⋮ Regression learning with non-identically and non-independently sampling ⋮ Statistical performance of support vector machines ⋮ Spectral Algorithms for Supervised Learning ⋮ Robustness and generalization ⋮ Uncertainty learning of rough set-based prediction under a holistic framework ⋮ Good edit similarity learning by loss minimization ⋮ A theoretical framework for deep transfer learning ⋮ Primal and dual model representations in kernel-based learning ⋮ Growing a list ⋮ Oracle inequalities for cross-validation type procedures ⋮ Multi-output learning via spectral filtering ⋮ Error bounds for \(l^p\)-norm multiple kernel learning with least square loss ⋮ Support vector machines with applications ⋮ Multiclass classification with potential function rules: margin distribution and generalization ⋮ A boosting approach for supervised Mahalanobis distance metric learning ⋮ Generalization Bounds for Some Ordinal Regression Algorithms ⋮ Transfer bounds for linear feature learning
This page was built for publication: 10.1162/153244302760200704