DOI: 10.1017/CBO9780511618796
zbMath: 1274.41001
OpenAlex: W4245558064
MaRDI QID: Q3426914
Ding-Xuan Zhou, Felipe Cucker
Publication date: 13 March 2007
Full work available at URL: https://doi.org/10.1017/cbo9780511618796
Related Items (citing works):
Machine learning with kernels for portfolio valuation and risk management ⋮
Online gradient descent algorithms for functional data learning ⋮
Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension ⋮
Learning performance of regularized moving least square regression ⋮
Stable splittings of Hilbert spaces of functions of infinitely many variables ⋮
Online regularized learning with pairwise loss functions ⋮
Online regression with unbounded sampling ⋮
Multivariate weighted Kantorovich operators ⋮
Operator-theoretic framework for forecasting nonlinear time series with kernel analog techniques ⋮
Normal estimation on manifolds by gradient learning ⋮
Binary separation and training support vector machines ⋮
Approximations of conditional probability density functions in Lebesgue spaces via mixture of experts models ⋮
On Gaussian kernels on Hilbert spaces and kernels on hyperbolic spaces ⋮
Least-squares regularized regression with dependent samples and q-penalty ⋮
ERM learning algorithm for multi-class classification ⋮
On grouping effect of elastic net ⋮
THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION ⋮
Divergence-free quasi-interpolation ⋮
Error bounds of multi-graph regularized semi-supervised classification ⋮
Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories ⋮
ℓ1-Norm support vector machine for ranking with exponentially strongly mixing sequence ⋮
Fast rates of minimum error entropy with heavy-tailed noise ⋮
Learning rate of distribution regression with dependent samples ⋮
Distributed kernel gradient descent algorithm for minimum error entropy principle ⋮
Kernel-based sparse regression with the correntropy-induced loss ⋮
A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval ⋮
Distributed semi-supervised regression learning with coefficient regularization ⋮
Echo state networks are universal ⋮
Distributed learning with multi-penalty regularization ⋮
On reproducing kernel Banach spaces: generic definitions and unified framework of constructions ⋮
Multivariate integration for analytic functions with Gaussian kernels ⋮
Deep CNNs as universal predictors of elasticity tensors in homogenization ⋮
Kolmogorov widths on the sphere via eigenvalue estimates for Hölderian integral operators ⋮
On the universal transformation of data-driven models to control systems ⋮
The convergence rate of semi-supervised regression with quadratic loss ⋮
Tractability of Function Approximation with Product Kernels ⋮
Topology, convergence, and reconstruction of predictive states ⋮
Regression learning with non-identically and non-independently sampling ⋮
Random sampling in reproducing kernel subspaces of \(L^p(\mathbb{R}^n)\) ⋮
Optimal classification of Gaussian processes in homo- and heteroscedastic settings ⋮
Local RBF-based penalized least-squares approximation on the sphere with noisy scattered data ⋮
Partial multi-dividing ontology learning algorithm ⋮
Convergence analysis of online learning algorithm with two-stage step size ⋮
The performance of semi-supervised Laplacian regularized regression with the least square loss ⋮
Fast learning from \(\alpha\)-mixing observations ⋮
Convolution random sampling in multiply generated shift-invariant spaces of \(L^p(\mathbb{R}^d)\) ⋮
Random sampling in multiply generated shift-invariant subspaces of mixed Lebesgue spaces \(L^{p,q}(\mathbb{R}\times\mathbb{R}^d)\) ⋮
Kernel conjugate gradient methods with random projections ⋮
Optimal learning with anisotropic Gaussian SVMs ⋮
Reproducing kernels and choices of associated feature spaces, in the form of \(L^2\)-spaces ⋮
Learning performance of regularized regression with multiscale kernels based on Markov observations ⋮
A closer look at covering number bounds for Gaussian kernels ⋮
Multi-task learning in vector-valued reproducing kernel Banach spaces with the \(\ell^1\) norm ⋮
Convergence rates of learning algorithms by random projection ⋮
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization ⋮
Coefficient-based regression with non-identical unbounded sampling ⋮
Mathematics of the neural response ⋮
Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation ⋮
Rademacher Chaos Complexities for Learning the Kernel Problem ⋮
Optimal learning rates for distribution regression ⋮
Learning rate of magnitude-preserving regularization ranking with dependent samples ⋮
Error analysis of multicategory support vector machine classifiers ⋮
Some new bounds on the entropy numbers of diagonal operators ⋮
Convergence analysis of deterministic kernel-based quadrature rules in misspecified settings ⋮
Learning rates for the risk of kernel-based quantile regression estimators in additive models ⋮
Online pairwise learning algorithms with convex loss functions ⋮
Numerical solution of the parametric diffusion equation by deep neural networks ⋮
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise ⋮
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces ⋮
Old and New on the Laplace-Beltrami Derivative ⋮
Moving quantile regression ⋮
Analysis of regularized least-squares in reproducing kernel Kreĭn spaces ⋮
A promenade through correct test sequences. I: Degree of constructible sets, Bézout's inequality and density ⋮
A statistical learning assessment of Huber regression ⋮
GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY ⋮
Variational Monte Carlo -- bridging concepts of machine learning and high-dimensional partial differential equations ⋮
Fast and strong convergence of online learning algorithms ⋮
A direct approach for function approximation on data defined manifolds ⋮
Optimal stochastic Bernstein polynomials in Ditzian-Totik type modulus of smoothness ⋮
\(L_2\)-norm sampling discretization and recovery of functions from RKHS with finite trace ⋮
Universalities of reproducing kernels revisited ⋮
Random sampling and approximation of signals with bounded derivatives ⋮
Superquantiles at work: machine learning applications and efficient subgradient computation ⋮
The learning rates of regularized regression based on reproducing kernel Banach spaces ⋮
Regularized ranking with convex losses and \(\ell^1\)-penalty ⋮
On extension theorems and their connection to universal consistency in machine learning ⋮
Error bounds for learning the kernel ⋮
Generalized Dobrushin ergodicity coefficient and ergodicities of non-homogeneous Markov chains ⋮
A statistical learning perspective on switched linear system identification ⋮
Interpolation, the rudimentary geometry of spaces of Lipschitz functions, and geometric complexity ⋮
Optimal rates for coefficient-based regularized regression ⋮
On the speed of uniform convergence in Mercer's theorem ⋮
INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING ⋮
Stochastic quasi-interpolation with Bernstein polynomials ⋮
Extreme learning machine for ranking: generalization analysis and applications ⋮
Sharp estimates for the covering numbers of the Weierstrass fractal kernel ⋮
Functional linear regression with Huber loss ⋮
ERROR ANALYSIS FOR THE SPARSE GRAPH-BASED SEMI-SUPERVISED CLASSIFICATION ALGORITHM ⋮
KERNEL METHODS FOR INDEPENDENCE MEASUREMENT WITH COEFFICIENT CONSTRAINTS ⋮
CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION ⋮
Stochastic subspace correction in Hilbert space ⋮
Optimal sampling points in reproducing kernel Hilbert spaces ⋮
Convergence rate for the moving least-squares learning with dependent sampling ⋮
Error analysis for \(l^q\)-coefficient regularized moving least-square regression ⋮
Error analysis on Hermite learning with gradient data ⋮
A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model ⋮
Statistical consistency of coefficient-based conditional quantile regression ⋮
Nonparametric regression using needlet kernels for spherical data ⋮
Multi-penalty regularization in learning theory ⋮
Generalization properties of doubly stochastic learning algorithms ⋮
Geometry on probability spaces ⋮
Regularization in kernel learning ⋮
Hermite learning with gradient data ⋮
Regularized least square regression with dependent samples ⋮
On the robustness of regularized pairwise learning methods based on kernels ⋮
Kernel-based conditional canonical correlation analysis via modified Tikhonov regularization ⋮
Multi-kernel regularized classifiers ⋮
An efficient kernel learning algorithm for semisupervised regression problems ⋮
ERM scheme for quantile regression ⋮
Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels ⋮
Integral operator approach to learning theory with unbounded sampling ⋮
An oracle inequality for regularized risk minimizers with strongly mixing observations ⋮
Piecewise linear approximation methods with stochastic sampling sites ⋮
Radial basis function approximation of noisy scattered data on the sphere ⋮
Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels ⋮
Generalization errors of Laplacian regularized least squares regression ⋮
Learning gradients via an early stopping gradient descent method ⋮
Generalization bounds of ERM algorithm with Markov chain samples ⋮
Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations ⋮
Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points ⋮
Learning theory approach to a system identification problem involving atomic norm ⋮
On the convergence rate of kernel-based sequential greedy regression ⋮
Approximation analysis of learning algorithms for support vector regression and quantile regression ⋮
On the regularized Laplacian eigenmaps ⋮
Laplacian twin support vector machine for semi-supervised classification ⋮
ERM learning with unbounded sampling ⋮
Error analysis for coefficient-based regularized regression in additive models ⋮
Sampling scattered data with Bernstein polynomials: stochastic and deterministic error estimates ⋮
Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains ⋮
Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs ⋮
Consistency of support vector machines using additive kernels for additive models ⋮
Multivariate approximation for analytic functions with Gaussian kernels ⋮
Estimating conditional quantiles with the help of the pinball loss ⋮
Primal and dual model representations in kernel-based learning ⋮
On the empirical estimation of integral probability metrics ⋮
Gauss-Hermite quadratures for functions from Hilbert spaces with Gaussian reproducing kernels ⋮
Learning sparse gradients for variable selection and dimension reduction ⋮
Estimation of convergence rate for multi-regression learning algorithm ⋮
Semi-supervised learning with the help of Parzen windows ⋮
Dynamical memory control based on projection technique for online regression ⋮
Conditional quantiles with varying Gaussians ⋮
Online learning for quantile regression and support vector regression ⋮
Adaptive kernel methods using the balancing principle ⋮
The generalization performance of ERM algorithm with strongly mixing observations ⋮
Quantile regression with \(\ell_1\)-regularization and Gaussian kernels ⋮
Convergence rate of the semi-supervised greedy algorithm ⋮
Generalization ability of fractional polynomial models ⋮
Unified approach to coefficient-based regularized regression ⋮
Classification with non-i.i.d. sampling ⋮
Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity ⋮
Indefinite kernel network with \(l^q\)-norm regularization ⋮
Learning rate of support vector machine for ranking ⋮
Convergence rate of kernel canonical correlation analysis ⋮
Applied harmonic analysis and data processing. Abstracts from the workshop held March 25--31, 2018 ⋮
Introduction to the peptide binding problem of computational immunology: new results ⋮
Generalization performance of bipartite ranking algorithms with convex losses ⋮
Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels ⋮
Robust pairwise learning with Huber loss ⋮
Unregularized online learning algorithms with general loss functions ⋮
Perturbation of convex risk minimization and its application in differential private learning algorithms ⋮
Statistical performance of optimal scoring in reproducing kernel Hilbert spaces ⋮
Regularized kernel-based reconstruction in generalized Besov spaces ⋮
A numerical algorithm for zero counting. I: Complexity and accuracy ⋮
Learning and approximation by Gaussians on Riemannian manifolds ⋮
The convergence rate for a \(K\)-functional in learning theory ⋮
Support vector machines regression with \(l^1\)-regularizer ⋮
Consistency of regularized spectral clustering ⋮
Logistic classification with varying gaussians ⋮
Learning from non-identical sampling for classification ⋮
Moving least-square method in learning theory ⋮
Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions ⋮
A Kernel Multiple Change-point Algorithm via Model Selection ⋮
Learning rates of multi-kernel regularized regression ⋮
Learning errors of linear programming support vector regression ⋮
Covering numbers of Gaussian reproducing kernel Hilbert spaces ⋮
Mercer theorem for RKHS on noncompact sets ⋮
Concentration estimates for the moving least-square method in learning theory ⋮
Semi-supervised learning based on high density region estimation ⋮
Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling ⋮
Statistical analysis of the moving least-squares method with unbounded sampling ⋮
A sparse grid based method for generative dimensionality reduction of high-dimensional data ⋮
A note on application of integral operator in learning theory ⋮
Entropy and sampling numbers of classes of ridge functions ⋮
Learning from uniformly ergodic Markov chains ⋮
Debiased magnitude-preserving ranking: learning rate and bias characterization ⋮
Learning under \((1 + \epsilon)\)-moment conditions ⋮
Learning rates of least-square regularized regression with polynomial kernels ⋮
Estimates of the norm of the Mercer kernel matrices with discrete orthogonal transforms ⋮
Generalization performance of graph-based semi-supervised classification ⋮
SVD revisited: a new variational principle, compatible feature maps and nonlinear extensions ⋮
Capacity dependent analysis for functional online learning algorithms ⋮
Theory of deep convolutional neural networks. III: Approximating radial functions ⋮
Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation ⋮
Random sampling of signals concentrated on compact set in localized reproducing kernel subspace of \(L^p (\mathbb{R}^n)\) ⋮
Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping ⋮
New Hilbert space tools for analysis of graph Laplacians and Markov processes ⋮
Robust partially linear trend filtering for regression estimation and structure discovery ⋮
Random average sampling in a reproducing kernel subspace of mixed Lebesgue space \(L^{p,q}(\mathbb{R}^{n+1})\) ⋮
Design of semi-tensor product-based kernel function for SVM nonlinear classification ⋮
Learning sparse and smooth functions by deep sigmoid nets ⋮
Rate of convergence of Stancu type modified \(q\)-Gamma operators for functions with derivatives of bounded variation ⋮
Error analysis of kernel regularized pairwise learning with a strongly convex loss ⋮
High-probability generalization bounds for pointwise uniformly stable algorithms ⋮
Random sampling and reconstruction in reproducing kernel subspace of mixed Lebesgue spaces ⋮
Learning rates of multitask kernel methods ⋮
On Szász-Durrmeyer type modification using Gould Hopper polynomials ⋮
Deep learning theory of distribution regression with CNNs ⋮
Learning performance of uncentered kernel-based principal component analysis ⋮
Coefficient-based regularized distribution regression ⋮
Online regularized learning algorithm for functional data ⋮
Random Sampling of Mellin Band-Limited Signals ⋮
Identifiability of interaction kernels in mean-field equations of interacting particles ⋮
Unsupervised learning of observation functions in state space models by nonparametric moment methods ⋮
Expected integration approximation under general equal measure partition ⋮
Error analysis of the moving least-squares method with non-identical sampling ⋮
Convergence bounds for empirical nonlinear least-squares ⋮
Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations ⋮
Some first results on the consistency of spatial regression with partial differential equation regularization ⋮
Learning theory of minimum error entropy under weak moment conditions ⋮
LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS ⋮
Learning by atomic norm regularization with polynomial kernels ⋮
A Statistical Learning Approach to Modal Regression ⋮
The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary ⋮
Distributed spectral pairwise ranking algorithms ⋮
Online regularized pairwise learning with non-i.i.d. observations ⋮
Gradient descent for robust kernel-based regression ⋮
Operator-valued positive definite kernels and differentiable universality ⋮
Learning with Boundary Conditions ⋮
Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection ⋮
Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces ⋮
Learning theory of distributed spectral algorithms ⋮
Consistency of learning algorithms using Attouch–Wets convergence ⋮
Regularized learning schemes in feature Banach spaces ⋮
Scenario Approach for Minmax Optimization with Emphasis on the Nonconvex Case: Positive Results and Caveats ⋮
On the K-functional in learning theory ⋮
Learning rates for regularized least squares ranking algorithm ⋮
A STUDY ON THE ERROR OF DISTRIBUTED ALGORITHMS FOR BIG DATA CLASSIFICATION WITH SVM ⋮
Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem ⋮
Support vector machines regression with unbounded sampling ⋮
A Note on Support Vector Machines with Polynomial Kernels ⋮
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery ⋮
Online Pairwise Learning Algorithms ⋮
Robust Support Vector Machines for Classification with Nonconvex and Smooth Losses ⋮
Constrained ERM Learning of Canonical Correlation Analysis: A Least Squares Perspective ⋮
Learning Rates for Classification with Gaussian Kernels ⋮
Error bounds for approximations with deep ReLU neural networks in W^{s,p} norms ⋮
Random sampling and reconstruction in multiply generated shift-invariant spaces ⋮
On probabilistic convergence rates of stochastic Bernstein polynomials ⋮
Some Numerical Test on the Convergence Rates of Regression with Differential Regularization ⋮
Reproducing Properties of Differentiable Mercer-Like Kernels on the Sphere ⋮
Multivariate Monte Carlo Approximation Based on Scattered Data ⋮
Coefficient-based regularization network with variance loss for error ⋮
Chebyshev type inequality for stochastic Bernstein polynomials ⋮
Robust kernel-based distribution regression ⋮
Reproducing Properties of Holomorphic Kernels on Balls of ℂ^q ⋮
Hermite-Birkhoff Interpolation on Arbitrarily Distributed Data in Banach Spaces ⋮
Learning Rates of l^q Coefficient Regularization Learning with Gaussian Kernel ⋮
Sampling and Stability ⋮
REGULARIZED LEAST SQUARE REGRESSION WITH SPHERICAL POLYNOMIAL KERNELS ⋮
LEARNING RATES OF REGULARIZED REGRESSION FOR FUNCTIONAL DATA ⋮
Learning Theory of Randomized Sparse Kaczmarz Method ⋮
Simultaneous estimations of optimal directions and optimal transformations for functional data ⋮
On the convergence rate and some applications of regularized ranking algorithms ⋮
Regularized modal regression with data-dependent hypothesis spaces ⋮
Randomized multi-scale kernels learning with sparsity constraint regularization for regression ⋮
A spectral series approach to high-dimensional nonparametric regression ⋮
ONLINE LEARNING WITH MARKOV SAMPLING ⋮
Nyström subsampling method for coefficient-based regularized regression ⋮
Online regularized pairwise learning with least squares loss ⋮
Performance analysis of the LapRSSLG algorithm in learning theory ⋮
Analysis of Regression Algorithms with Unbounded Sampling ⋮
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning ⋮
ANALYSIS OF CLASSIFICATION WITH A REJECT OPTION ⋮
Error Estimates for Multivariate Regression on Discretized Function Spaces ⋮
SVM LEARNING AND L^p APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS ⋮
Deep neural networks for rotation-invariance approximation and learning ⋮
Semi-supervised learning with summary statistics ⋮
Distributed learning with indefinite kernels ⋮
Optimal Rates for Multi-pass Stochastic Gradient Methods ⋮
VECTOR VALUED REPRODUCING KERNEL HILBERT SPACES AND UNIVERSALITY ⋮
Sparse additive machine with ramp loss ⋮
Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression ⋮
Multikernel Regression with Sparsity Constraint ⋮
Estimates of learning rates of regularized regression via polyline functions ⋮
New Insights Into Learning With Correntropy-Based Regression ⋮
A Framework of Learning Through Empirical Gain Maximization ⋮
Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere ⋮
Optimal learning with Gaussians and correntropy loss ⋮
REPRODUCING KERNEL HILBERT SPACES OF FRACTAL INTERPOLATION FUNCTIONS FOR CURVE FITTING PROBLEMS ⋮
Error analysis of the kernel regularized regression based on refined convex losses and RKBSs ⋮
Thresholded spectral algorithms for sparse approximations ⋮
Approximating functions with multi-features by deep convolutional neural networks ⋮
Approximations of non-homogeneous Markov chains on abstract states spaces ⋮
Regularization: From Inverse Problems to Large-Scale Machine Learning ⋮
Learning Interaction Kernels in Mean-Field Equations of First-Order Systems of Interacting Particles ⋮
Analysis of k-partite ranking algorithm in area under the receiver operating characteristic curve criterion ⋮
Error analysis of the moving least-squares regression learning algorithm with β-mixing and non-identical sampling