A new analytical approach to consistency and overfitting in regularized empirical risk minimization
Publication: Q3133606
Abstract: This work considers the problem of binary classification: given training data drawn from a certain population, together with associated labels, determine the best label for an element not in the training set. More specifically, the work studies a variant of the regularized empirical risk functional that is defined intrinsically on the observed data and does not depend on the underlying population. Tools from modern analysis yield a concise proof of asymptotic consistency as the regularization parameters are taken to zero at rates related to the sample size. These analytical tools provide a new framework for understanding overfitting and underfitting, and rigorously connect the notion of overfitting with a loss of compactness.
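The abstract leaves the functional implicit; as an illustrative sketch only (the notation \(\ell\), \(R\), and \(\lambda_n\) below is assumed for exposition and is not taken from the paper), a regularized empirical risk functional over training points \(x_1, \dots, x_n\) with labels \(y_1, \dots, y_n\) typically has the form
\[
E_n(u) \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(u(x_i), y_i\bigr) \;+\; \lambda_n\, R(u),
\qquad \lambda_n \to 0 \ \text{as} \ n \to \infty,
\]
where \(\ell\) is a pointwise loss and \(R\) is a regularizer built from the observed data alone. Consistency then asks that minimizers of \(E_n\) converge to a minimizer of the population risk; in this framework, overfitting corresponds to \(\lambda_n\) decaying so quickly that minimizing sequences lose compactness and fail to converge.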
Recommendations
- Efficiency of classification methods based on empirical risk minimization
- Error analysis on regularized learning
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- Iterative regularization for learning with convex loss functions
- Consistency analysis of spectral regularization algorithms
Cites work
- scientific article; zbMATH DE number 5988004 (title unavailable)
- scientific article; zbMATH DE number 4138299 (title unavailable)
- scientific article; zbMATH DE number 44825 (title unavailable)
- scientific article; zbMATH DE number 1332320 (title unavailable)
- scientific article; zbMATH DE number 1909499 (title unavailable)
- scientific article; zbMATH DE number 1448982 (title unavailable)
- scientific article; zbMATH DE number 3320765 (title unavailable)
- A first course in Sobolev spaces
- A variational approach to remove outliers and impulse noise
- An introduction to \(\Gamma\)-convergence
- Consistency of Cheeger and ratio graph cuts
- Continuum limit of total variation on point clouds
- Convergence rates of posterior distributions
- Gradient flows in metric spaces and in the space of probability measures
- Hitchhiker's guide to the fractional Sobolev spaces
- Iterative Methods for Total Variation Denoising
- Modern methods in the calculus of variations. \(L^p\) spaces
- On the Rate of Convergence of Empirical Measures in ∞-transportation Distance
- Parametrized measures and variational principles
- Posterior contraction rates for the Bayesian approach to linear ill-posed inverse problems
- Statistical learning theory: models, concepts, and results
- Theoretical foundations and numerical methods for sparse recovery. Papers based on the presentations of the summer school ``Theoretical foundations and numerical methods for sparse recovery'', Vienna, Austria, August 31 -- September 4, 2009
Cited in (11 documents)
- Introduction: Big data and partial differential equations
- Consistency of fractional graph-Laplacian regularization in semisupervised learning with finite labels
- Analysis of \(p\)-Laplacian regularization in semisupervised learning
- A maximum principle argument for the uniform convergence of graph Laplacian regressors
- Variational limits of \(k\)-NN graph-based functionals on data clouds
- Eikonal depth: an optimal control approach to statistical depths
- scientific article; zbMATH DE number 7370580 (title unavailable)
- Mumford-Shah functionals on graphs and their asymptotics
- On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms
- Large data limit for a phase transition model with the \(p\)-Laplacian on point clouds
- Partial differential equations and variational methods for geometric processing of images