Rademacher and Gaussian complexities: risk bounds and structural results
DOI: 10.1162/153244303321897690
zbMath: Zbl 1084.68549
OpenAlex: W4251115686
MaRDI QID: Q4825353
Authors: Peter L. Bartlett, Shahar Mendelson
Publication date: 28 October 2004
Published in: Journal of Machine Learning Research
Full work available at URL: https://doi.org/10.1162/153244303321897690
Related Items
Proximal operator and optimality conditions for ramp loss SVM, Kernel selection with spectral perturbation stability of kernel matrix, D-learning to estimate optimal individual treatment rules, State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings, Coupling loss and self-used privileged information guided multi-view transfer learning, Machine learning from a continuous viewpoint. I, Consistency analysis of an empirical minimum error entropy algorithm, Tikhonov, Ivanov and Morozov regularization for support vector machine learning, A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model, Minimum distance Lasso for robust high-dimensional regression, Influence diagnostics in support vector machines, Matrix completion via max-norm constrained optimization, Noisy tensor completion via the sum-of-squares hierarchy, Regularized least square regression with dependent samples, A chain rule for the expected suprema of Gaussian processes, Robust grouped variable selection using distributionally robust optimization, Measuring distributional asymmetry with Wasserstein distance and Rademacher symmetrization, Frameworks and results in distributionally robust optimization, Local Rademacher complexity: sharper risk bounds with and without unlabeled samples, The learning rate of \(l_2\)-coefficient regularized classification with strong loss, IRAHC: instance reduction algorithm using hyperrectangle clustering, Improved multi-view privileged support vector machine, The Vapnik-Chervonenkis dimension of graph and recursive neural networks, Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric, Handling concept drift via model reuse, Fast structured prediction using large margin sigmoid belief networks, Convergence of online pairwise regression learning with quadratic loss, Just interpolate: kernel ``ridgeless'' regression can generalize, Robustness and generalization, A computational learning theory of active object recognition under uncertainty, Homotopy continuation approaches for robust SV classification and regression, Generalization error bounds for the logical analysis of data, Robust classification via MOM minimization, On the empirical estimation of integral probability metrics, Penalized empirical risk minimization over Besov spaces, Incentive compatible regression learning, The generalization performance of ERM algorithm with strongly mixing observations, Transfer bounds for linear feature learning, A theory of learning from different domains, An improved analysis of the Rademacher data-dependent bound using its self bounding property, Learning with mitigating random consistency from the accuracy measure, The value of agreement a new boosting algorithm, Kernel methods in machine learning, Bayesian fractional posteriors, Generalization performance of bipartite ranking algorithms with convex losses, Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning,
Extreme value correction: a method for correcting optimistic estimations in rule learning, A tight upper bound on the generalization error of feedforward neural networks, Deep ReLU network expression rates for option prices in high-dimensional, exponential Lévy models, Unregularized online learning algorithms with general loss functions, A statistician teaches deep learning, High-dimensional asymptotics of prediction: ridge regression and classification, On uniform concentration bounds for bi-clustering by using the Vapnik-Chervonenkis theory, Learning similarity with cosine similarity ensemble, Robust multicategory support vector machines using difference convex algorithm, Optimal convergence rate of the universal estimation error, Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space, Large-width bounds for learning half-spaces on distance spaces, Reducing mechanism design to algorithm design via machine learning, Convergence rates of learning algorithms by random projection, Learning the coordinate gradients, Sequential complexities and uniform martingale laws of large numbers, Coupling privileged kernel method for multi-view learning, Surrogate losses in passive and active learning, 2D compressed learning: support matrix machine with bilinear random projections, Learning rates of multi-kernel regularized regression, Moment inequalities for functions of independent random variables, A novel multi-view learning developed from single-view patterns, A local Vapnik-Chervonenkis complexity, Topological properties of the set of functions generated by neural networks of fixed size, A survey of randomized algorithms for training neural networks, Fast rates for support vector machines using Gaussian kernels, Inductive matrix completion with feature selection, Online pairwise learning algorithms with convex loss functions, Learning sparse conditional distribution: an efficient kernel-based approach, Learning via variably scaled kernels, A theory of learning with similarity functions, Convergence analysis of kernel canonical correlation analysis: theory and practice, Relaxing support vectors for classification, Generalization bounds for learning with linear, polygonal, quadratic and conic side knowledge, Data-driven estimation in equilibrium using inverse optimization, Kernel ellipsoidal trimming, Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness, High-dimensional dynamics of generalization error in neural networks, Modeling interactive components by coordinate kernel polynomial models, Inference on covariance operators via concentration inequalities: \(k\)-sample tests, classification, and clustering via Rademacher complexities, On strong consistency of kernel \(k\)-means: a Rademacher complexity approach, Belief-based chaotic algorithm for support vector data description, Rademacher complexity in Neyman-Pearson classification, Efficient learning with robust gradient descent, Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings, Scalable Gaussian kernel support vector machines with sublinear training time complexity, A statistical learning perspective on switched linear system identification, Some new copula based distribution-free tests of independence among several random variables, Convolutional spectral kernel learning with generalization guarantees, On the robustness of randomized classifiers to adversarial examples,
Compressive sensing and neural networks from a statistical learning perspective, From inexact optimization to learning via gradient concentration, Generalization bounds for metric and similarity learning, The Barron space and the flow-induced function spaces for neural network models, On Robustness of Principal Component Regression, Deep learning: a statistical viewpoint, Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation, Approximation bounds for norm constrained neural networks with applications to regression and GANs, Tighter guarantees for the compressive multi-layer perceptron, Deep empirical risk minimization in finance: Looking into the future, Interpolation consistency training for semi-supervised learning, A class of dimension-free metrics for the convergence of empirical measures, A Deep Generative Approach to Conditional Sampling, Statistical guarantees for regularized neural networks, Robust cost-sensitive kernel method with Blinex loss and its applications in credit risk evaluation, Robust partially linear trend filtering for regression estimation and structure discovery, PAC-learning with approximate predictors, Minimax rates for conditional density estimation via empirical entropy, High-probability generalization bounds for pointwise uniformly stable algorithms, Incomplete-view oriented kernel learning method with generalization error bound, Sampling rates for \(\ell^1\)-synthesis, Data-adaptive discriminative feature localization with statistically guaranteed interpretation, Theory of graph neural networks: representation and learning, Tropical Support Vector Machines: Evaluations and Extension to Function Spaces, Adaptive metric dimensionality reduction, On the information bottleneck theory of deep learning, The committee machine: computational to statistical gaps in learning a two-layers neural network, Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup*, Online regularized learning with pairwise loss functions, A General Framework for Dimensionality-Reducing Data Visualization Mapping, Fast rates by transferring from auxiliary hypotheses, Efficient kernel-based variable selection with sparsistency, Regularization via Mass Transportation, When Do Extended Physics-Informed Neural Networks (XPINNs) Improve Generalization?, On nonparametric classification with missing covariates, On the proliferation of support vectors in high dimensions*, Mean-field inference methods for neural networks, Complexity of pattern classes and the Lipschitz property, Nonlinear Variable Selection via Deep Neural Networks, Application of integral operator for regularized least-square regression, Error bounds of multi-graph regularized semi-supervised classification, Benign overfitting in linear regression, Graphical Convergence of Subgradients in Nonconvex Optimization and Learning, Imaging conductivity from current density magnitude using neural networks*, \(L_{p}\)-norm Sauer-Shelah lemma for margin multi-category classifiers, Learning with Boundary Conditions, Logarithmic sample bounds for sample average approximation with capacity- or budget-constraints, Analysis of the generalization ability of a full decision tree, Multi-objective Parameter Synthesis in Probabilistic Hybrid Systems, Switching: understanding the class-reversed sampling in tail sample memorization, Regularized learning schemes in feature Banach spaces,
Learning bounds for quantum circuits in the agnostic setting, Statistical performance of support vector machines, Can neural networks extrapolate? Discussion of a theorem by Pedro Domingos, High-dimensional estimation with geometric constraints, A kernel approach to multi-task learning with task-specific kernels, A Hilbert Space Embedding for Distributions, Greedy training algorithms for neural networks and applications to PDEs, Multiple Spectral Kernel Learning and a Gaussian Complexity Computation, Guaranteed Classification via Regularized Similarity Learning, Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem, Theory and Algorithms for Shapelet-Based Multiple-Instance Learning, On Kernel Method–Based Connectionist Models and Supervised Deep Learning Without Backpropagation, Online Pairwise Learning Algorithms, Robust Support Vector Machines for Classification with Nonconvex and Smooth Losses, Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes, Generalization Analysis of Fredholm Kernel Regularized Classifiers, Boosting Method for Local Learning in Statistical Pattern Recognition, U-Processes and Preference Learning, Structure from Randomness in Halfspace Learning with the Zero-One Loss, Generalization Error in Deep Learning, Nonstationary Bandits with Habituation and Recovery Dynamics, Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data, Persistent homology for low-complexity models, On the rate of convergence for multi-category classification based on convex losses, Concentration inequalities and asymptotic results for ratio type empirical processes, Application of integral operator for vector-valued regression learning, Rademacher Chaos Complexities for Learning the Kernel Problem, Deep Convolutional Framelets: A General Deep Learning Framework for Inverse Problems, Variance-based regularization with convex objectives, Feature augmentation for the inversion of the Fourier transform with limited data, Error analysis of multicategory support vector machine classifiers, Theory of Classification: a Survey of Some Recent Advances, Learning rates of gradient descent algorithm for classification, Generalization analysis of multi-modal metric learning, A permutation approach to validation*, Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas, Inverse Optimization with Noisy Data, Low-Rank Approximation and Completion of Positive Tensors, A Vector-Contraction Inequality for Rademacher Complexities, Structural Online Learning, Adaptive Relevance Matrices in Learning Vector Quantization, Permutational Rademacher Complexity, Error bounds for learning the kernel, Sparse additive machine with ramp loss, Maximal margin classification for metric spaces, Multiple Testing with the Structure-Adaptive Benjamini–Hochberg Algorithm, Nonasymptotic bounds for vector quantization in Hilbert spaces, Local Rademacher complexities, Boosting with early stopping: convergence and consistency, Generalisation error in learning with random features and the hidden manifold model*, Stop Memorizing: A Data-Dependent Regularization Framework for Intrinsic Pattern Learning, Entropy and Concentration