Efficient agnostic learning of neural networks with bounded fan-in

From MaRDI portal

Publication:4336393

DOI: 10.1109/18.556601
zbMath: 0874.68253
OpenAlex: W2028461624
MaRDI QID: Q4336393

Peter L. Bartlett, Wee Sun Lee, Robert C. Williamson

Publication date: 12 June 1997

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://semanticscholar.org/paper/2f41eedb489db10ce8e9a469931f0d1741c669e4




Related Items (31)

Deep learning: a statistical viewpoint
Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
Rescaled pure greedy algorithm for Hilbert and Banach spaces
Benign overfitting in linear regression
Nonlinear function approximation: computing smooth solutions with an adaptive greedy algorithm
Unnamed Item
Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
Multi-kernel regularized classifiers
Nonlinear orthogonal series estimates for random design regression
Nonexact oracle inequalities, \(r\)-learnability, and fast rates
Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors
Greedy training algorithms for neural networks and applications to PDEs
Generalization Analysis of Fredholm Kernel Regularized Classifiers
Agnostic Learning from Tolerant Natural Proofs
Convergence of a Least-Squares Monte Carlo Algorithm for American Option Pricing with Dependent Sample Data
Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks
Learning by mirror averaging
Approximation and learning by greedy algorithms
Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
A note on margin-based loss functions in classification
Monte Carlo algorithms for optimal stopping and statistical learning
Large-Margin Classification in Infinite Neural Networks
Boosting the margin: a new explanation for the effectiveness of voting methods
Scale-sensitive dimensions and skeleton estimates for classification
General Error Estimates for the Longstaff–Schwartz Least-Squares Monte Carlo Algorithm
The complexity of model classes, and smoothing noisy data
Inequalities for uniform deviations of averages from expectations with applications to nonparametric regression
Functional aggregation for nonparametric regression
Local greedy approximation for nonlinear regression and neural network training
Boosting with early stopping: convergence and consistency
Hardness results for neural network approximation problems




This page was built for publication: Efficient agnostic learning of neural networks with bounded fan-in