Learning rates of least-square regularized regression
Publication: 2505653
DOI: 10.1007/s10208-004-0155-9
zbMath: 1100.68100
OpenAlex: W2017332415
Wikidata: Q58759029 (Scholia: Q58759029)
MaRDI QID: Q2505653
Yiming Ying, Qiang Wu, Ding-Xuan Zhou
Publication date: 28 September 2006
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-004-0155-9
Mathematics Subject Classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
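The publication analyzes the regularized least-squares (Tikhonov) scheme over a reproducing kernel Hilbert space \(\mathcal{H}_K\). As a sketch for orientation, a standard formulation of that scheme, given a sample \(z = \{(x_i, y_i)\}_{i=1}^m\) and a regularization parameter \(\lambda > 0\), is:

```latex
% Regularized least-squares over an RKHS H_K: empirical square loss
% plus a Tikhonov penalty weighted by lambda > 0.
f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K}
    \left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2
            + \lambda \, \| f \|_K^2 \right\}
```

By the representer theorem the minimizer is a kernel expansion over the sample, \(f_{z,\lambda}(x) = \sum_{i=1}^m \alpha_i K(x, x_i)\) with \(\alpha = (K + \lambda m I)^{-1} y\), which makes the scheme directly computable. A minimal sketch in Python (the Gaussian kernel and all parameter values below are illustrative choices, not taken from the publication):

```python
# Minimal sketch of regularized least-squares regression via the
# representer theorem: f(x) = sum_i alpha_i K(x, x_i), where
# alpha = (K + lam * m * I)^{-1} y.
import numpy as np

def rls_fit(X, y, lam, kernel):
    """Fit regularized least-squares regression; returns a predictor."""
    m = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])  # Gram matrix
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)   # coefficients

    def predict(x):
        return sum(a * kernel(x, xi) for a, xi in zip(alpha, X))

    return predict

# Usage with a Gaussian kernel (illustrative parameters):
gauss = lambda a, b, s=1.0: np.exp(-np.linalg.norm(a - b) ** 2 / (2 * s**2))
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * np.random.randn(20)
f = rls_fit(X, y, lam=1e-3, kernel=gauss)
print(f(np.array([0.5])))
```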
Related Items (first 100 shown)
Machine learning with kernels for portfolio valuation and risk management ⋮ Convergence rate for the moving least-squares learning with dependent sampling ⋮ Error analysis for \(l^q\)-coefficient regularized moving least-square regression ⋮ A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model ⋮ Generalization performance of Lagrangian support vector machine based on Markov sampling ⋮ Statistical consistency of coefficient-based conditional quantile regression ⋮ Learning by atomic norm regularization with polynomial kernels ⋮ Least-squares regularized regression with dependent samples and \(q\)-penalty ⋮ Regularization in kernel learning ⋮ ERM learning algorithm for multi-class classification ⋮ Regularized least square regression with dependent samples ⋮ THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION ⋮ Optimal learning rates for kernel partial least squares ⋮ Learning with sample dependent hypothesis spaces ⋮ Application of integral operator for regularized least-square regression ⋮ The optimal solution of multi-kernel regularization learning ⋮ The consistency of least-square regularized regression with negative association sequence ⋮ An efficient kernel learning algorithm for semisupervised regression problems ⋮ Learning rates of regularized regression on the unit sphere ⋮ Regularized least square regression with unbounded and dependent sampling ⋮ Integral operator approach to learning theory with unbounded sampling ⋮ Deterministic error bounds for kernel-based learning techniques under bounded noise ⋮ Learning with coefficient-based regularization and \(\ell^1\)-penalty ⋮ Error analysis on regularized regression based on the maximum correntropy criterion ⋮ Learning rates for least square regressions with coefficient regularization ⋮ Least squares regression with \(l_1\)-regularizer in sum space ⋮ Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels ⋮ Optimal learning rates for least squares regularized regression with unbounded sampling ⋮ Least square regression with indefinite kernels and coefficient regularization ⋮ Generalization errors of Laplacian regularized least squares regression ⋮ Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations ⋮ Learning theory approach to a system identification problem involving atomic norm ⋮ Regularized learning schemes in feature Banach spaces ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Quantitative convergence analysis of kernel based large-margin unified machines ⋮ Approximation analysis of learning algorithms for support vector regression and quantile regression ⋮ Regression learning with non-identically and non-independently sampling ⋮ Learning rates of regularized regression for exponentially strongly mixing sequence ⋮ On the regularized Laplacian eigenmaps ⋮ Spectral Algorithms for Supervised Learning ⋮ ERM learning with unbounded sampling ⋮ Concentration estimates for learning with unbounded sampling ⋮ Nonasymptotic analysis of robust regression with modified Huber's loss ⋮ Consistency analysis of spectral regularization algorithms ⋮ Optimal regression rates for SVMs using Gaussian kernels ⋮ Estimation of convergence rate for multi-regression learning algorithm ⋮ Error bounds for \(l^p\)-norm multiple kernel learning with least square loss ⋮ Adaptive kernel methods using the balancing principle ⋮ Unified approach to coefficient-based regularized regression ⋮ Kernel methods in system identification, machine learning and function estimation: a survey ⋮ Learning with varying insensitive loss ⋮ Indefinite kernel network with \(l^q\)-norm regularization ⋮ Constructive analysis for least squares regression with generalized \(K\)-norm regularization ⋮ Learning rate of support vector machine for ranking ⋮ Kernel gradient descent algorithm for information theoretic learning ⋮ Generalization performance of least-square regularized regression algorithm with Markov chain samples ⋮ Constructive analysis for coefficient regularization regression algorithms ⋮ Learning rates for kernel-based expectile regression ⋮ Optimal rate of the regularized regression learning algorithm ⋮ Statistical performance of optimal scoring in reproducing kernel Hilbert spaces ⋮ Learning performance of regularized regression with multiscale kernels based on Markov observations ⋮ Learning rates for regularized classifiers using multivariate polynomial kernels ⋮ Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions ⋮ Convergence rates of learning algorithms by random projection ⋮ Support vector machines regression with \(l^1\)-regularizer ⋮ Consistency of regularized spectral clustering ⋮ Logistic classification with varying gaussians ⋮ Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces ⋮ Learning rates of multi-kernel regularized regression ⋮ Learning errors of linear programming support vector regression ⋮ Coefficient-based regression with non-identical unbounded sampling ⋮ Least-square regularized regression with non-iid sampling ⋮ Application of integral operator for vector-valued regression learning ⋮ Statistical analysis of the moving least-squares method with unbounded sampling ⋮ Least Square Regression with \(l^p\)-Coefficient Regularization ⋮ System identification using kernel-based regularization: new insights on stability and consistency issues ⋮ Approximating and learning by Lipschitz kernel on the sphere ⋮ Error analysis of multicategory support vector machine classifiers ⋮ A note on application of integral operator in learning theory ⋮ Learning with correntropy-induced losses for regression with mixture of symmetric stable noise ⋮ Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces ⋮ Moving quantile regression ⋮ Analysis of regularized least-squares in reproducing kernel Kreĭn spaces ⋮ Analysis of Regression Algorithms with Unbounded Sampling ⋮ GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY ⋮ SVM-boosting based on Markov resampling: theory and algorithm ⋮ Analysis of support vector machines regression ⋮ Random sampling and approximation of signals with bounded derivatives ⋮ Least square regularized regression for multitask learning ⋮ Learning under \((1 + \epsilon)\)-moment conditions ⋮ Learning rates of least-square regularized regression with polynomial kernels ⋮ Estimates of learning rates of regularized regression via polyline functions ⋮ High order Parzen windows and randomized sampling ⋮ Generalization performance of graph-based semi-supervised classification ⋮ Boosting as a kernel-based method ⋮ Half supervised coefficient regularization for regression learning with unbounded sampling ⋮ Generalization performance of Gaussian kernels SVMC based on Markov sampling ⋮ Bayesian frequentist bounds for machine learning and system identification ⋮ ERROR ANALYSIS FOR THE SPARSE GRAPH-BASED SEMI-SUPERVISED CLASSIFICATION ALGORITHM ⋮ CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION