Best choices for regularization parameters in learning theory: on the bias-variance problem.
Publication: 1865826
DOI: 10.1007/s102080010030
zbMath: 1057.68085
OpenAlex: W2149614431
Wikidata: Q57733260 (Scholia: Q57733260)
MaRDI QID: Q1865826
Publication date: 2002
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s102080010030
Mathematics Subject Classification:
Learning and adaptive systems in artificial intelligence (68T05)
Analysis of variance and covariance (ANOVA) (62J10)
Related Items
Machine learning with kernels for portfolio valuation and risk management
LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS
Binary separation and training support vector machines
Error analysis on Hermite learning with gradient data
Complexity control in statistical learning
Feasibility-based fixed point networks
Nonparametric regression using needlet kernels for spherical data
Nonparametric stochastic approximation with large step-sizes
Regularization in kernel learning
THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION
A new discrete Cucker-Smale flocking model under hierarchical leadership
Shannon sampling and function reconstruction from point values
Learning with sample dependent hypothesis spaces
Application of integral operator for regularized least-square regression
Wasserstein-Based Projections with Applications to Inverse Problems
The consistency of least-square regularized regression with negative association sequence
Multi-kernel regularized classifiers
Are Loss Functions All the Same?
Hardy variation framework for restoration of weather degraded images
AN ERROR ANALYSIS OF LAVRENTIEV REGULARIZATION IN LEARNING THEORY
A computationally efficient scheme for feature extraction with kernel discriminant analysis
Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
Modal additive models with data-driven structure identification
Learning rates for least square regressions with coefficient regularization
Least squares regression with \(l_1\)-regularizer in sum space
Fourier frequencies in affine iterated function systems
Generalization and learning rate of multi-class support vector classification and regression
Least square regression with indefinite kernels and coefficient regularization
Generalization bounds of ERM algorithm with Markov chain samples
Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Approximation of Lyapunov functions from noisy data
The existence and uniqueness of solutions for kernel-based system identification
Convex regularization in statistical inverse learning problems
Just interpolate: kernel "ridgeless" regression can generalize
Learning rates of regularized regression for exponentially strongly mixing sequence
Analysis of convergence performance of neural networks ranking algorithm
Learning from regularized regression algorithms with \(p\)-order Markov chain sampling
Concentration estimates for learning with unbounded sampling
A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
Estimation of convergence rate for multi-regression learning algorithm
Bias corrected regularization kernel method in ranking
The generalization performance of ERM algorithm with strongly mixing observations
Support vector machines regression with unbounded sampling
Generalization Analysis of Fredholm Kernel Regularized Classifiers
LQG Online Learning
On the interplay between entropy and robustness of gene regulatory networks
Applications of the Bernstein-Durrmeyer operators in estimating the norm of Mercer kernel matrices
Learning rate of support vector machine for ranking
Compressed classification learning with Markov chain samples
Kernel gradient descent algorithm for information theoretic learning
Generalization performance of least-square regularized regression algorithm with Markov chain samples
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
Optimal rates for regularization of statistical inverse learning problems
Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
Learning performance of regularized regression with multiscale kernels based on Markov observations
Learning rates for regularized classifiers using multivariate polynomial kernels
Almost optimal estimates for approximation and learning by radial basis function networks
REGULARIZED LEAST SQUARE ALGORITHM WITH TWO KERNELS
Convergence rates of learning algorithms by random projection
Flocking in noisy environments
Positive definite dot product kernels in learning theory
Additive regularization trade-off: fusion of training and validation levels in kernel methods
Mercer theorem for RKHS on noncompact sets
Learning with generalization capability by kernel methods of bounded complexity
The weight-decay technique in learning from data: an optimization point of view
Approximation with polynomial kernels and SVM classifiers
REGULARIZED LEAST SQUARE REGRESSION WITH SPHERICAL POLYNOMIAL KERNELS
LEARNING RATES OF REGULARIZED REGRESSION FOR FUNCTIONAL DATA
Least Square Regression with lp-Coefficient Regularization
On the mathematics of emergence
Learning rate of magnitude-preserving regularization ranking with dependent samples
Rejoinder
Approximating and learning by Lipschitz kernel on the sphere
Error analysis of multicategory support vector machine classifiers
DISCRETIZATION ERROR ANALYSIS FOR TIKHONOV REGULARIZATION
Convergence and consistency of ERM algorithm with uniformly ergodic Markov chain samples
Analysis of Regression Algorithms with Unbounded Sampling
GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY
Flocking with informed agents
SVM-boosting based on Markov resampling: theory and algorithm
Analysis of support vector machines regression
Learning from uniformly ergodic Markov chains
The learning rates of regularized regression based on reproducing kernel Banach spaces
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Estimates of the norm of the Mercer kernel matrices with discrete orthogonal transforms
Optimal rate for support vector machine regression with Markov chain samples
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Generalization performance of Gaussian kernels SVMC based on Markov sampling
Functional linear regression with Huber loss
Shannon sampling. II: Connections to learning theory
Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Learning Interaction Kernels in Mean-Field Equations of First-Order Systems of Interacting Particles