On the mathematical foundations of learning
Publication: 2761194
DOI: 10.1090/S0273-0979-01-00923-5
zbMath: 0983.68162
Wikidata: Q56387242 (Scholia: Q56387242)
MaRDI QID: Q2761194
Publication date: 17 December 2001
Published in: Bulletin of the American Mathematical Society
MSC classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Coding and information theory (compaction, compression, models of communication, encoding schemes, etc.) (aspects in computer science) (68P30)
Related Items
Tensors in computations, Random sampling in reproducing kernel spaces with mixed norm, Convergence and finite sample approximations of entropic regularized Wasserstein distances in Gaussian and RKHS settings, An Online Projection Estimator for Nonparametric Regression in Reproducing Kernel Hilbert Spaces, Consistency of Monte Carlo estimators for risk-neutral PDE-constrained optimization, Learning theory for inferring interaction kernels in second-order interacting agent systems, Semi-discrete Tikhonov regularization in RKHS with large randomly distributed noise, On the approximation of functions by tanh neural networks, Random sampling of signals concentrated on compact set in localized reproducing kernel subspace of \(L^p (\mathbb{R}^n)\), Attaining the Chebyshev bound for optimal learning: a numerical algorithm, Robust optimal estimation of location from discretely sampled functional data, Unified Tests for Nonparametric Functions in RKHS With Kernel Selection and Regularization, Ultra-quantum coherent states in a single finite quantum system, Random average sampling in a reproducing kernel subspace of mixed Lebesgue space \(L^{p,q}(\mathbb{R}^{n+1})\), Approximation Analysis of Convolutional Neural Networks, Learning dynamical systems from data: a simple cross-validation perspective. IV: Case with partial observations, Sparse machine learning in Banach spaces, A COLLOCATION METHOD FOR FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND VIA ITERATIVE REGULARIZATION SCHEME, Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation, Learning sparse and smooth functions by deep sigmoid nets, Accelerating metabolic models evaluation with statistical metamodels: application to Salmonella infection models, Domain Generalization by Functional Regression, Learning system parameters from Turing patterns, Random sampling and reconstruction in reproducing kernel subspace of mixed Lebesgue spaces, Learning rates of multitask kernel methods, Gaussian kernels on nonsimply connected closed Riemannian manifolds are never positive definite, Nonexact oracle inequalities, \(r\)-learnability, and fast rates, On the coercivity condition in the learning of interacting particle systems, A duality approach to regularized learning problems in Banach spaces, Harmonic analysis of network systems via kernels and their boundary realizations, Sampling discretization error of integral norms for function classes with small smoothness, Deep learning theory of distribution regression with CNNs, Ensemble forecasts in reproducing kernel Hilbert space family, Random Sampling of Mellin Band-Limited Signals, Sample Size Estimates for Risk-Neutral Semilinear PDE-Constrained Optimization, A close look at the entropy numbers of the unit ball of the reproducing Hilbert space of isotropic positive definite kernels, Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem, Support vector machines regression with unbounded sampling, Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces, FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test, A Note on Support Vector Machines with Polynomial Kernels, Online Pairwise Learning Algorithms, Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes, Generalization Analysis of Fredholm Kernel Regularized Classifiers, Random sampling and reconstruction in multiply generated shift-invariant spaces, Generalized Mercer 
Kernels and Reproducing Kernel Banach Spaces, Fluctuation-Dissipation Theorem and Models of Learning, A classification method based on generalized eigenvalue problems, Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory, Learning with Kernels and Logical Representations, Rejoinder, Minimum norm interpolation in the ℓ1(ℕ) space, Finite Sample Approximations of Exact and Entropic Wasserstein Distances Between Covariance Operators and Gaussian Processes, Approximative Policy Iteration for Exit Time Feedback Control Problems Driven by Stochastic Differential Equations using Tensor Train Format, Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming, Grothendieck bound in a single quantum system, Convergence bounds for empirical nonlinear least-squares, Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations, LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS, Generalized Gramians: Creating frame vectors in maximal subspaces, Projection-based curve clustering, Online regularized pairwise learning with non-i.i.d. observations, Shannon sampling and function reconstruction from point values, On Different Facets of Regularization Theory, Generalized System Identification with Stable Spline Kernels, Estimation of Entropy and Mutual Information, Leave-One-Out Bounds for Kernel Methods, Quantum machine learning: a classical perspective, Semi-Infinite Linear Regression and Its Applications, Kernel-Based Discretization for Solving Matrix-Valued PDEs, Mathematical diagnostics via nonsmooth analysis, Unnamed Item, Are Loss Functions All the Same?, Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression, Imaging conductivity from current density magnitude using neural networks*, AN ERROR ANALYSIS OF LAVRENTIEV REGULARIZATION IN LEARNING THEORY, Full error analysis for the training of deep neural networks, Operator-valued positive definite kernels and differentiable universality, Approximating Optimal feedback Controllers of Finite Horizon Control Problems Using Hierarchical Tensor Formats, Learning with Boundary Conditions, Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces, Kernel Methods for the Approximation of Nonlinear Systems, Consistency of learning algorithms using Attouch–Wets convergence, Regularized learning schemes in feature Banach spaces, On the K-functional in learning theory, Unnamed Item, Unnamed Item, On invariance and selectivity in representation learning, Sparse Support Vector Machines in Reproducing Kernel Banach Spaces, Learning with Convex Loss and Indefinite Kernels, Statistical estimation of measure invariants, Applications of the Bernstein-Durrmeyer operators in estimating the norm of Mercer kernel matrices, Minimization of Error Functionals over Perceptron Networks, On the entropy numbers between the anisotropic spaces and the spaces of functions with mixed smoothness, Convergence rate of SVM for kernel-based robust regression, On probabilistic convergence rates of stochastic Bernstein polynomials, CONVERGENCE OF A LEAST‐SQUARES MONTE CARLO ALGORITHM FOR AMERICAN OPTION PRICING WITH DEPENDENT SAMPLE DATA, On the Decay Rate of the Singular Values of Bivariate Functions, Multivariate Monte 
Carlo Approximation Based on Scattered Data, Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data, Optimal rate of the regularized regression learning algorithm, Blessing of dimensionality: mathematical foundations of the statistical physics of data, CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY, Non Asymptotic Minimax Estimation of Functionals with Noisy Observations, Unnamed Item, MULTIVARIATE CALIBRATION WITH SUPPORT VECTOR REGRESSION BASED ON RANDOM PROJECTION, ON THE INCLUSION RELATION OF REPRODUCING KERNEL HILBERT SPACES, REGULARIZED LEAST SQUARE ALGORITHM WITH TWO KERNELS, Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel, Sobolev error estimates and a priori parameter selection for semi-discrete Tikhonov regularization, SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming, Unnamed Item, Sampling and Stability, REGULARIZED LEAST SQUARE REGRESSION WITH SPHERICAL POLYNOMIAL KERNELS, LEARNING RATES OF REGULARIZED REGRESSION FOR FUNCTIONAL DATA, 𝑊-Markov measures, transfer operators, wavelets and multiresolutions, Least Square Regression with lp-Coefficient Regularization, Error analysis of multicategory support vector machine classifiers, Applications of Bernstein-Durrmeyer operators in estimating the covering number, ONLINE LEARNING WITH MARKOV SAMPLING, Reconstruction from convolution random sampling in local shift invariant spaces, Theory of Classification: a Survey of Some Recent Advances, Nonuniform sampling, reproducing kernels, and the associated Hilbert spaces, DISCRETIZATION ERROR ANALYSIS FOR TIKHONOV REGULARIZATION, The Random Feature Model for Input-Output Maps between Banach Spaces, Performance analysis of the LapRSSLG algorithm in learning theory, Analysis of Regression Algorithms with Unbounded Sampling, Generalized linear models for geometrical current predictors: An application to predict garment fit, GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY, General Error Estimates for the Longstaff–Schwartz Least-Squares Monte Carlo Algorithm, Convergence of knowledge in a stochastic cultural evolution model with population structure, social learning and credibility biases, ANALYSIS OF CLASSIFICATION WITH A REJECT OPTION, Error Estimates for Multivariate Regression on Discretized Function Spaces, Convergence of a Least‐Squares Monte Carlo Algorithm for Bounded Approximating Sets, A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY, Kernels for sequentially ordered data, Kernel partial least squares for stationary data, VECTOR VALUED REPRODUCING KERNEL HILBERT SPACES AND UNIVERSALITY, Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces, Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression, Generalization Error of Minimum Weighted Norm and Kernel Interpolation, Unnamed Item, Unnamed Item, Estimates of learning rates of regularized regression via polyline functions, Enhancing Accuracy of Deep Learning Algorithms by Training with Low-Discrepancy Sequences, Optimal rate for support vector machine regression with Markov chain samples, VECTOR VALUED REPRODUCING KERNEL HILBERT SPACES OF INTEGRABLE FUNCTIONS AND MERCER THEOREM, Shifting artificial data to detect system failures, Unnamed Item, A multi-level procedure for enhancing accuracy of machine learning 
algorithms, Error analysis of the kernel regularized regression based on refined convex losses and RKBSs, Optimal Approximation with Sparsely Connected Deep Neural Networks, Sequential Sampling for Optimal Weighted Least Squares Approximations in Hierarchical Spaces, Regularization: From Inverse Problems to Large-Scale Machine Learning, Analysis of k-partite ranking algorithm in area under the receiver operating characteristic curve criterion, Data-Driven Optimization: A Reproducing Kernel Hilbert Space Approach, The covering number in learning theory, Machine learning with kernels for portfolio valuation and risk management, Error analysis on Hérmite learning with gradient data, On the stability of reproducing kernel Hilbert spaces of discrete-time impulse responses, The effect of structure on image classification using signatures, Explicit upper bound for entropy numbers, Deep microlocal reconstruction for limited-angle tomography, Sampling discretization and related problems, Nonparametric regression using needlet kernels for spherical data, On Gaussian kernels on Hilbert spaces and kernels on hyperbolic spaces, Minoration via mixed volumes and Cover's problem for general channels, On kernel design for regularized LTI system identification, Divergence-free quasi-interpolation, Kernel methods for the approximation of some key quantities of nonlinear systems, Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories, Distributed learning via filtered hyperinterpolation on manifolds, An efficient kernel learning algorithm for semisupervised regression problems, Learning rate of distribution regression with dependent samples, A computationally efficient scheme for feature extraction with kernel discriminant analysis, Fisher-regularized support vector machine, Manifold regularization based on Nyström type subsampling, Piecewise linear approximation methods with stochastic sampling sites, On spectral windows in supervised learning from data, Learning rates for least square regressions with coefficient regularization, An extension of Mercer's theory to \(L^p\), Echo state networks are universal, Spectral approach for kernel-based interpolation, Extrapolation estimates for entropy numbers, Generalization errors of Laplacian regularized least squares regression, Efficient spatio-temporal Gaussian regression via Kalman filtering, On the mathematical foundations of stable RKHSs, Learning rates for the kernel regularized regression with a differentiable strongly convex loss, Convergence of online pairwise regression learning with quadratic loss, Approximation of Lyapunov functions from noisy data, On the convergence rate of kernel-based sequential greedy regression, State-space LPV model identification using kernelized machine learning, Random sampling in reproducing kernel subspaces of \(L^p(\mathbb{R}^n)\), Optimal classification of Gaussian processes in homo- and heteroscedastic settings, A meta-learning approach to the regularized learning -- case study: blood glucose prediction, Analysis of convergence performance of neural networks ranking algorithm, ERM learning with unbounded sampling, Robust regression using biased objectives, The regularized least squares algorithm and the problem of learning halfspaces, Nonparametric distributed learning under general designs, Correction of AI systems by linear discriminants: probabilistic foundations, Sampling scattered data with Bernstein polynomials: stochastic and deterministic error estimates, 
Concentration estimates for learning with unbounded sampling, Primal and dual model representations in kernel-based learning, Optimal regression rates for SVMs using Gaussian kernels, Convergence rate and Bahadur type representation of general smoothing spline M-estimates, Fast construction of correcting ensembles for legacy artificial intelligence systems: algorithms and a case study, Adaptive kernel methods using the balancing principle, The generalization performance of ERM algorithm with strongly mixing observations, Fast learning of relational kernels, A note on different covering numbers in learning theory., Constructive analysis for least squares regression with generalized \(K\)-norm regularization, Applied harmonic analysis and data processing. Abstracts from the workshop held March 25--31, 2018, Regularization in a functional reproducing kernel Hilbert space, Regularity of Gaussian processes on Dirichlet spaces, On the identifiability of interaction functions in systems of interacting particles, Reduced rank modeling for functional regression with functional responses, Support vector machines regression with \(l^1\)-regularizer, Positive definite dot product kernels in learning theory, Computational complexity of the integration problem for anisotropic classes, Iterative surrogate model optimization (ISMO): an active learning algorithm for PDE constrained optimization with deep neural networks, On the representation of smooth functions on the sphere using finitely many bits, Semi-supervised learning based on high density region estimation, Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling, Probability error bounds for approximation of functions in reproducing kernel Hilbert spaces, Image and video colorization using vector-valued reproducing kernel Hilbert spaces, Topological properties of the set of functions generated by neural networks of fixed size, Fast rates for support vector machines using Gaussian kernels, Reproducing kernel Hilbert space compactification of unitary evolution groups, Random sampling and reconstruction of concentrated signals in a reproducing kernel space, System identification using kernel-based regularization: new insights on stability and consistency issues, Estimates of the approximation error using Rademacher complexity: Learning vector-valued functions, When is there a representer theorem? Reflexive Banach spaces, Numerical solution of the parametric diffusion equation by deep neural networks, Solving the Kolmogorov PDE by means of deep learning, A direct approach for function approximation on data defined manifolds, Analysis of support vector machines regression, Optimal stochastic Bernstein polynomials in Ditzian-Totik type modulus of smoothness, Learning from uniformly ergodic Markov chains, Curve based approximation of measures on manifolds by discrepancy minimization, Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions, Learning dynamical systems from data: a simple cross-validation perspective. 
I: Parametric kernel flows, Learning rates of least-square regularized regression with polynomial kernels, Quasi-interpolation for multivariate density estimation on bounded domain, Estimates of the norm of the Mercer kernel matrices with discrete orthogonal transforms, Generalization performance of graph-based semi-supervised classification, Distribution-free consistency of empirical risk minimization and support vector regression, Learning a function from noisy samples at a finite sparse set of points, On the speed of uniform convergence in Mercer's theorem, Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices, Stochastic quasi-interpolation with Bernstein polynomials, Sharp estimates for the covering numbers of the Weierstrass fractal kernel, Understanding neural networks with reproducing kernel Banach spaces, A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids, One-shot learning of stochastic differential equations with data adapted kernels, A theoretical analysis of deep neural networks and parametric PDEs, Computational Advantages of Reverberating Loops for Sensorimotor Learning, Coefficient regularized regression with non-iid sampling, Binary separation and training support vector machines, Reproducing properties of differentiable Mercer-like kernels, Complexity control in statistical learning, Consistent identification of Wiener systems: a machine learning viewpoint, ERM learning algorithm for multi-class classification, Optimal shift invariant spaces and their Parseval frame generators, Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets, Generalized semi-inner products with applications to regularized learning, THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION, Minimax optimal rates of convergence for multicategory classifications, Learning with sample dependent hypothesis spaces, Application of integral operator for regularized least-square regression, Error bounds of multi-graph regularized semi-supervised classification, The optimal solution of multi-kernel regularization learning, The convergence rates of Shannon sampling learning algorithms, Reproducing kernels: harmonic analysis and some of their applications, Multi-robots Gaussian estimation and coverage control: from client-server to peer-to-peer architectures, Statistical learning guarantees for compressive clustering and compressive mixture modeling, Kernel-based methods for Volterra series identification, On semiseparable kernels and efficient implementation for regularized system identification and function estimation, Kernel-based online gradient descent using distributed approach, Modal additive models with data-driven structure identification, Consistency and convergence rate of phylogenetic inference via regularization, Functional reproducing kernel Hilbert spaces for non-point-evaluation functional data, On reproducing kernel Banach spaces: generic definitions and unified framework of constructions, Learning dynamical systems from data: a simple cross-validation perspective. 
III: Irregularly-sampled time series, A sharp upper bound for sampling numbers in \(L_2\), Learning high-dimensional parametric maps via reduced basis adaptive residual networks, Regularized minimal-norm solution of an overdetermined system of first kind integral equations, The convergence rate of semi-supervised regression with quadratic loss, The existence and uniqueness of solutions for kernel-based system identification, When is there a representer theorem? Nondifferentiable regularisers and Banach spaces, Spherical basis functions and uniform distribution of points on spheres, On approximation by reproducing kernel spaces in weighted \(L^p\) spaces, Ranking and empirical minimization of \(U\)-statistics, Learning rates of regularized regression for exponentially strongly mixing sequence, A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization, A kernel approach to multi-task learning with task-specific kernels, Low-rank kernel approximation of Lyapunov functions using neural networks, Learning dynamical systems using local stability priors, Convergence analysis of online learning algorithm with two-stage step size, \(L_1\)-quantization and clustering in Banach spaces, A note on microlocal kernel design for some slow-fast stochastic differential equations with critical transitions and application to EEG signals, Entropy numbers of Besov classes of generalized smoothness on the sphere, Optimal experimental design and some related control problems, Solutions of nonlinear control and estimation problems in reproducing kernel Hilbert spaces: existence and numerical determination, The performance of semi-supervised Laplacian regularized regression with the least square loss, Supervised classification of geometrical objects by integrating currents and functional data analysis, Minimax adaptive dimension reduction for regression, Distributed learning and distribution regression of coefficient regularization, Convergence analysis of online algorithms, Bayes and empirical Bayes semi-blind deconvolution using eigenfunctions of a prior covariance, The performance bounds of learning machines based on exponentially strongly mixing sequences, On the estimation of regression functions, Reproducing kernels and choices of associated feature spaces, in the form of \(L^2\)-spaces, Learning performance of regularized regression with multiscale kernels based on Markov observations, Almost optimal estimates for approximation and learning by radial basis function networks, Convergence rates of learning algorithms by random projection, Learning sets with separating kernels, The covering number for some Mercer kernel Hilbert spaces, Approximation by neural networks and learning theory, The consistency of multicategory support vector machines, Approximation of kernel matrices by circulant matrices and its application in kernel selection methods, The weight-decay technique in learning from data: an optimization point of view, Approximation with polynomial kernels and SVM classifiers, Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization, Sampling discretization error of integral norms for function classes, The method of virtual experts in mathematical diagnostics, Application of integral operator for vector-valued regression learning, Supervised Learning by Support Vector Machines, Nonlinear adaptive filtering using kernel-based algorithms with dictionary adaptation, Pseudo-dimension and entropy of manifolds 
formed by affine-invariant dictionary, High-dimensional brain: a tool for encoding and rapid learning of memories by single neurons, On combining machine learning with decision making, Approximating and learning by Lipschitz kernel on the sphere, Conditional probability estimation based classification with class label missing at random, Learning rates of gradient descent algorithm for classification, Convergence and consistency of ERM algorithm with uniformly ergodic Markov chain samples, Coefficient-based regularized regression with dependent and unbounded sampling, Kernel Principal Component Analysis: Applications, Implementation and Comparison, Sparse RKHS estimation via globally convex optimization and its application in LPV-IO identification, The Mathematical Analysis of Biological Aggregation and Dispersal: Progress, Problems and Perspectives, Universalities of reproducing kernels revisited, The learning rates of regularized regression based on reproducing kernel Banach spaces, Density problem and approximation error in learning theory, Positive-definiteness, integral equations and Fourier transforms, Fast Boolean approximation methods for solving integral equations in high dimensions, Self-interacting diffusions. III: Symmetric interactions, INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING, Half supervised coefficient regularization for regression learning with unbounded sampling, Approximate interpolation with applications to selecting smoothing parameters, Generalization performance of Gaussian kernels SVMC based on Markov sampling, Extreme learning machine for ranking: generalization analysis and applications, A linear algebraic approach to holomorphic reproducing kernels in \(\mathbb C^n\), On universal estimators in learning theory, Shannon sampling. 
II: Connections to learning theory, Rates of minimization of error functionals over Boolean variable-basis functions, Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory, Optimal sampling points in reproducing kernel Hilbert spaces, Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint, Generalization performance of Lagrangian support vector machine based on Markov sampling, Gaussian approximations in high dimensional estimation, Multi-penalty regularization in learning theory, Accuracy of suboptimal solutions to kernel principal component analysis, Weyl eigenvalue asymptotics and sharp adaptation on vector bundles, Geometry on probability spaces, Nonparametric stochastic approximation with large step-sizes, Efficiency of classification methods based on empirical risk minimization, Upper bounds for errors of estimators in a problem of nonparametric regression: the adaptive case and the case of unknown measure \(\rho _X\), Regularization in kernel learning, Entropy numbers of functions on \([-1,1\) with Jacobi weights], Nonlinear projection recovery in digital inpainting for color image restoration, Terminated Ramp--Support Vector machines: A nonparametric data dependent kernel, Input estimation in nonlinear dynamical systems using differential algebra techniques, Nonlinear function approximation: computing smooth solutions with an adaptive greedy algorithm, Regression models for functional data by reproducing kernel Hilbert spaces methods, Distributed parametric and nonparametric regression with on-line performance bounds computation, On regularization algorithms in learning theory, Multi-kernel regularized classifiers, Learning rates of regularized regression on the unit sphere, Reproducing kernel Hilbert spaces associated with kernels on topological spaces, Universally consistent vertex classification for latent positions graphs, Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels, Integral operator approach to learning theory with unbounded sampling, An oracle inequality for regularized risk minimizers with strongly mixing observations, The learning rate of \(l_2\)-coefficient regularized classification with strong loss, Random sampling of bandlimited functions, Metamodeling the learning-hiding competition between tumours and the immune system: A kinematic approach, Least squares regression with \(l_1\)-regularizer in sum space, Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels, Functional analysis techniques to improve similarity matrices in discrimination problems, Comparing fixed and variable-width Gaussian networks, Estimates on compressed neural networks regression, Least square regression with indefinite kernels and coefficient regularization, Learning gradients via an early stopping gradient descent method, Generalization bounds of ERM algorithm with Markov chain samples, Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations, Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points, A reproducing kernel Hilbert space approach to functional linear regression, Non-negativity constraints on the pre-image for pattern recognition with kernel machines, On complex-valued 2D eikonals. 
IV: continuation past a caustic, Prediction error identification of linear systems: a nonparametric Gaussian regression approach, Statistical analysis of kernel-based least-squares density-ratio estimation, Learning from regularized regression algorithms with \(p\)-order Markov chain sampling, Direct learning of LPV controllers from data, Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains, Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs, Convexity condition in Cucker-Smale theorems in the theory of teaching, Consistency analysis of spectral regularization algorithms, Estimation of convergence rate for multi-regression learning algorithm, Semi-supervised learning with the help of Parzen windows, Error bounds for \(l^p\)-norm multiple kernel learning with least square loss, Support vector machines with applications, Convergence rate of the semi-supervised greedy algorithm, Generalization ability of fractional polynomial models, Kernel methods in system identification, machine learning and function estimation: a survey, On the modeling of collective learning dynamics, Provably fast training algorithms for support vector machines, Compressed classification learning with Markov chain samples, Multilevel augmentation algorithms based on fast collocation methods for solving ill-posed integral equations, Generalization performance of least-square regularized regression algorithm with Markov chain samples, Behavior of a functional in learning theory, Generalization performance of bipartite ranking algorithms with convex losses, On adaptive estimators in statistical learning theory, Relevant sampling in finitely generated shift-invariant spaces, An instrumental least squares support vector machine for nonlinear system identification, Constructive analysis for coefficient regularization regression algorithms, Learning rates for kernel-based expectile regression, Estimation of the misclassification error for multicategory support vector machine classification, On conditionally positive definite dot product kernels, Perturbation of convex risk minimization and its application in differential private learning algorithms, Orthogonality from disjoint support in reproducing kernel Hilbert spaces, Consistent learning by composite proximal thresholding, Classification with polynomial kernels and \(l^1\)-coefficient regularization, Learning rates for regularized classifiers using multivariate polynomial kernels, Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions, Random sampling in shift invariant spaces, The convergence rate for a \(K\)-functional in learning theory, Regularized learning in Banach spaces as an optimization problem: representer theorems, Consistency of regularized spectral clustering, Learning rates for multi-kernel linear programming classifiers, Multivariate Bernstein-Durrmeyer operators with arbitrary weight functions, Frames, Riesz bases, and sampling expansions in Banach spaces via semi-inner products, Estimates of variation with respect to a set and applications to optimization problems, Sparse discretization matrices for Volterra integral operators with applications to numerical differentiation, Approximation of frame based missing data recovery, Learning rates of multi-kernel regularized regression, Learning errors of linear programming support vector regression, Covering numbers of Gaussian reproducing kernel Hilbert spaces, Finite dimensional approximation and 
Newton-based algorithm for stochastic approximation in Hilbert space, Mercer theorem for RKHS on noncompact sets, Learning with generalization capability by kernel methods of bounded complexity, Monte Carlo algorithms for optimal stopping and statistical learning, A new kernel-based approach for linear system identification, Regression in Tensor Product Spaces by the Method of Sieves, The geometry of quantum learning, Covering compact metric spaces greedily, Koopman spectra in reproducing kernel Hilbert spaces
Cites Work
- On the parabolic kernel of the Schrödinger operator
- Multivariate interpolation at arbitrary points made simple
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation
- Rates of convex approximation in non-Hilbert spaces
- Regularization networks and support vector machines
- A theory of the learnable
- Complexity and real computation: a manifesto
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- The importance of convexity in learning with squared loss
- Newton's method for overdetermined systems of equations
- Piecewise-polynomial approximations of functions of the classes $W_p^{\alpha}$
- Theory of Reproducing Kernels
- A logical calculus of the ideas immanent in nervous activity
- Convergence of stochastic processes