Regularization networks and support vector machines
From MaRDI portal
Publication: 1968640
DOI: 10.1023/A:1018946025316
zbMath: 0939.68098
OpenAlex: W2110652811
Wikidata: Q63508138 (Scholia: Q63508138)
MaRDI QID: Q1968640
Massimiliano Pontil, Tomaso Poggio, Theodoros Evgeniou
Publication date: 21 March 2000
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1023/a:1018946025316
Keywords: regularization; radial basis functions; support vector machines; reproducing kernel Hilbert space; structural risk minimization
Related Items
The covering number in learning theory, Optimal sampling points in reproducing kernel Hilbert spaces, Another look at statistical learning theory and regularization, Statistical properties of the method of regularization with periodic Gaussian reproducing kernel, Generalization performance of Lagrangian support vector machine based on Markov sampling, Influence diagnostics in support vector machines, On the stability of reproducing kernel Hilbert spaces of discrete-time impulse responses, A new kernel-based approach to hybrid system identification, Principal manifold learning by sparse grids, Multi-penalty regularization in learning theory, On kernel design for regularized LTI system identification, The generalized cross validation filter, Regularized least square regression with dependent samples, Distributed regression learning with coefficient regularization, Optimal learning rates for kernel partial least squares, Terminated Ramp--Support Vector machines: A nonparametric data dependent kernel, Distributed parametric and nonparametric regression with on-line performance bounds computation, On regularization algorithms in learning theory, Multi-kernel regularized classifiers, A computationally efficient scheme for feature extraction with kernel discriminant analysis, Regularized least square regression with unbounded and dependent sampling, Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels, Integral operator approach to learning theory with unbounded sampling, Learning rates for least square regressions with coefficient regularization, Least squares regression with \(l_1\)-regularizer in sum space, Least square regression with indefinite kernels and coefficient regularization, Efficient spatio-temporal Gaussian regression via Kalman filtering, Tuning complexity in regularized kernel-based regression and linear system identification: the robustness of the marginal likelihood estimator, Gaussian processes for object 
categorization, Learning rates for the kernel regularized regression with a differentiable strongly convex loss, Approximation of Lyapunov functions from noisy data, Full-body person recognition system., On complex-valued 2D eikonals. IV: continuation past a caustic, A meta-learning approach to the regularized learning -- case study: blood glucose prediction, Just interpolate: kernel "ridgeless" regression can generalize, Laplacian twin support vector machine for semi-supervised classification, Parallel algorithm for training multiclass proximal support vector machines, Learning theory viewpoint of approximation by positive linear operators, Consistency analysis of spectral regularization algorithms, Explicit connections between longitudinal data analysis and kernel machines, Improving the solution of least squares support vector machines with application to a blast furnace system, Semi-supervised learning with the help of Parzen windows, Conditional quantiles with varying Gaussians, Comment on: "Support vector machines with applications", Knowledge-based Green's kernel for support vector regression, Adaptive kernel methods using the balancing principle, A note on different covering numbers in learning theory., Kernel methods in system identification, machine learning and function estimation: a survey, Calibration of \(\epsilon\)-insensitive loss in support vector machines regression, Generalization performance of least-square regularized regression algorithm with Markov chain samples, Mini-workshop: Deep learning and inverse problems. 
Abstracts from the mini-workshop held March 4--10, 2018, A unifying representer theorem for inverse problems and machine learning, Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels, Derivative reproducing properties for kernel methods in learning theory, Learning performance of regularized regression with multiscale kernels based on Markov observations, Orthogonality from disjoint support in reproducing kernel Hilbert spaces, Consistent learning by composite proximal thresholding, Discriminatively regularized least-squares classification, Parzen windows for multi-class classification, Learning rates for regularized classifiers using multivariate polynomial kernels, Efficient regularized least-squares algorithms for conditional ranking on relational data, Learning and approximation by Gaussians on Riemannian manifolds, Double-fold localized multiple matrixized learning machine, Convergence rates of learning algorithms by random projection, The convergence rate for a \(K\)-functional in learning theory, Regularized learning in Banach spaces as an optimization problem: representer theorems, Support vector machines regression with \(l^1\)-regularizer, Logistic classification with varying gaussians, Generalized regularized least-squares approximation of noisy data with application to stochastic PDEs, Learning from non-identical sampling for classification, Moving least-square method in learning theory, Regularized vector field learning with sparse approximation for mismatch removal, Optimization problems in statistical learning: duality and optimality conditions, Positive definite dot product kernels in learning theory, Fast computation of smoothing splines subject to equality constraints, Spatially adaptive sparse grids for high-dimensional data-driven problems, Mercer theorem for RKHS on noncompact sets, Concentration estimates for the moving least-square method in learning theory, Approximation of high-dimensional kernel 
matrices by multilevel circulant matrices, System identification using kernel-based regularization: new insights on stability and consistency issues, Incorporating prior knowledge in support vector regression, Nonparallel plane proximal classifier, Robust generalized eigenvalue classifier with ellipsoidal uncertainty, A note on application of integral operator in learning theory, Data-driven estimation in equilibrium using inverse optimization, Kernel logistic PLS: a tool for supervised nonlinear dimensionality reduction and binary classification, High-dimensional pseudo-logistic regression and classification with applications to gene expression data, SVM-boosting based on Markov resampling: theory and algorithm, Analysis of support vector machines regression, Scattered data reconstruction by regularization in B-spline and associated wavelet spaces, An approach for constructing complex discriminating surfaces based on Bayesian interference of the maximum entropy, On a regularization of unsupervised domain adaptation in RKHS, Robustness by reweighting for kernel estimators: an overview, Learning rates of least-square regularized regression with polynomial kernels, High order Parzen windows and randomized sampling, Distribution-free consistency of empirical risk minimization and support vector regression, Learning a function from noisy samples at a finite sparse set of points, Robustness of reweighted least squares kernel based regression, Bayesian frequentist bounds for machine learning and system identification, Similarity, kernels, and the fundamental constraints on cognition, Learning performance of regularized moving least square regression, Binary separation and training support vector machines, Complexity control in statistical learning, Fully online classification by regularization, Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets, Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and 
Applications, Generalized semi-inner products with applications to regularized learning, Shannon sampling and function reconstruction from point values, On Different Facets of Regularization Theory, Generalized System Identification with Stable Spline Kernels, Forecasting stock market movement direction with support vector machine, Learning minimum variance discrete hedging directly from the market, Leave-One-Out Bounds for Kernel Methods, Learning with sample dependent hypothesis spaces, The optimal solution of multi-kernel regularization learning, A Unified View of Nonparametric Trend-Cycle Predictors Via Reproducing Kernel Hilbert Spaces, Are Loss Functions All the Same?, AN ERROR ANALYSIS OF LAVRENTIEV REGULARIZATION IN LEARNING THEORY, Learning with Boundary Conditions, Kernel-based sparse regression with the correntropy-induced loss, Distributed learning with multi-penalty regularization, Generalization and learning rate of multi-class support vector classification and regression, Kernel Methods for the Approximation of Nonlinear Systems, Regularized minimal-norm solution of an overdetermined system of first kind integral equations, On the K-functional in learning theory, Regression learning with non-identically and non-independently sampling, On approximation by reproducing kernel spaces in weighted \(L^p\) spaces, Statistical performance of support vector machines, Learning rates of regularized regression for exponentially strongly mixing sequence, Spectral Algorithms for Supervised Learning, Identification of chimera using machine learning, Learning dynamical systems using local stability priors, Solutions of nonlinear control and estimation problems in reproducing kernel Hilbert spaces: existence and numerical determination, Learning with Convex Loss and Indefinite Kernels, Support vector machines regression with unbounded sampling, Foundations of Support Constraint Machines, A Note on Support Vector Machines with Polynomial 
Kernels, Convergence analysis of online algorithms, Bayes and empirical Bayes semi-blind deconvolution using eigenfunctions of a prior covariance, Kernel Absolute Summability Is Sufficient but Not Necessary for RKHS Stability, Coefficient-based regularization network with variance loss for error, Model selection approaches for non-linear system identification: a review, Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data, CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY, On the rate of convergence for multi-category classification based on convex losses, Classifier learning with a new locality regularization method, ON THE INCLUSION RELATION OF REPRODUCING KERNEL HILBERT SPACES, REGULARIZED LEAST SQUARE ALGORITHM WITH TWO KERNELS, On the mathematical foundations of learning, Bayesian kernel based classification for financial distress detection, Approximation by neural networks and learning theory, The consistency of multicategory support vector machines, Tight frame expansions of multiscale reproducing kernels in Sobolev spaces, Dual bases and discrete reproducing kernels: a unified framework for RBF and MLS approximation, SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming, Approximation with polynomial kernels and SVM classifiers, Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization, REGULARIZED LEAST SQUARE REGRESSION WITH SPHERICAL POLYNOMIAL KERNELS, LEARNING RATES OF REGULARIZED REGRESSION FOR FUNCTIONAL DATA, A novel multi-view learning developed from single-view patterns, Supervised Learning by Support Vector Machines, Balancing principle in supervised learning for a general regularization scheme, Estimating Interest Rate Curves by Support Vector Regression, Least Square Regression with lp-Coefficient Regularization, Support vector machines: a nonlinear 
modelling and control perspective, ONLINE LEARNING WITH MARKOV SAMPLING, Theory of Classification: a Survey of Some Recent Advances, DISCRETIZATION ERROR ANALYSIS FOR TIKHONOV REGULARIZATION, GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY, An Algorithm for Unconstrained Quadratically Penalized Convex Optimization, Universalities of reproducing kernels revisited, SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS, A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY, Analysis of Convertible Bond Value Based on Integration of Support Vector Machine and Copula Function, Density problem and approximation error in learning theory, HIERARCHICAL SPARSE METHOD WITH APPLICATIONS IN VISION AND SPEECH RECOGNITION, VECTOR VALUED REPRODUCING KERNEL HILBERT SPACES AND UNIVERSALITY, ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY, Online regularized generalized gradient classification algorithms, Nonlinear system identification via data augmentation, Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression, Multikernel Regression with Sparsity Constraint, Boosting as a kernel-based method, Optimal rate for support vector machine regression with Markov chain samples, OR Practice–Data Analytics for Optimal Detection of Metastatic Prostate Cancer, INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING, Half supervised coefficient regularization for regression learning with unbounded sampling, Supervised pre-clustering for sparse regression, Approximate interpolation with applications to selecting smoothing parameters, Shannon sampling. 
II: Connections to learning theory, Quadratic Convergence of Smoothing Newton's Method for 0/1 Loss Optimization, Error analysis of the kernel regularized regression based on refined convex losses and RKBSs, Laplacian pair-weight vector projection for semi-supervised learning, Learning system parameters from Turing patterns, Linear inverse problems with Hessian-Schatten total variation, Learning rates of multitask kernel methods, Hierarchical regularization networks for sparsification based learning on noisy datasets, A duality approach to regularized learning problems in Banach spaces, Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation, Margin Error Bounds for Support Vector Machines on Reproducing Kernel Banach Spaces, On Learning Vector-Valued Functions, System identification techniques based on support vector machines without bias term, Multicategory proximal support vector machine classifiers, Soft and hard classification by reproducing kernel Hilbert space methods, An Invariance Property of Predictors in Kernel-Induced Hypothesis Spaces, Online Classification with Varying Gaussians, Minimum norm interpolation in the ℓ1(ℕ) space