Neural Network Learning
From MaRDI portal
DOI: 10.1017/CBO9780511624216 · zbMath: 0968.68126 · MaRDI QID: Q4951814
Anthony, Martin; Bartlett, Peter L.
Publication date: 9 May 2000
Mathematics Subject Classification:
- Computational learning theory (68Q32)
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
- Pattern recognition, speech recognition (68T10)
- Research exposition (monographs, survey articles) pertaining to computer science (68-02)
Related Items
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- On aggregation for heavy-tailed classes
- The shattering dimension of sets of linear functionals.
- Performance of empirical risk minimization in linear aggregation
- On the optimal estimation of probability measures in weak and strong topologies
- Distributionally-robust machine learning using locally differentially-private data
- Statistical consistency of coefficient-based conditional quantile regression
- Optimal aggregation of classifiers in statistical learning.
- On data classification by iterative linear partitioning
- Some connections between learning and optimization
- Efficient algorithms for learning functions with bounded variation
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
- Geometric properties of the ridge function manifold
- Large width nearest prototype classification on general distance spaces
- Deep learning for the partially linear Cox model
- A framework for statistical clustering with constant time approximation algorithms for \(K\)-median and \(K\)-means clustering
- Guest editorial: Learning theory
- Multi-class pattern classification using neural networks
- A chain rule for the expected suprema of Gaussian processes
- Deep learning for constrained utility maximisation
- A Boolean measure of similarity
- Complexity of hyperconcepts
- Quantitative error estimates for a least-squares Monte Carlo algorithm for American option pricing
- Neural networks with linear threshold activations: structure and algorithms
- Localization of VC classes: beyond local Rademacher complexities
- On the generalization error of fixed combinations of classifiers
- Simulation-based optimization of Markov decision processes: an empirical process theory approach
- Relation between weight size and degree of over-fitting in neural network regression
- An axiomatic approach to intrinsic dimension of a dataset
- Model selection in nonparametric regression
- Dynamic treatment regimes: technical challenges and applications
- Robust cutpoints in the logical analysis of numerical data
- Optimal social choice functions: a utilitarian view
- VE dimension induced by Bayesian networks over the Boolean domain
- Generalization error bounds for the logical analysis of data
- Concentration estimates for learning with unbounded sampling
- Q-learning with censored data
- Indexability, concentration, and VC theory
- Relative deviation learning bounds and generalization with unbounded loss functions
- Statistical estimation of ergodic Markov chain kernel over discrete state space
- Transfer bounds for linear feature learning
- A theory of learning from different domains
- Stability and model selection in \(k\)-means clustering
- Nonparametric regression using deep neural networks with ReLU activation function
- Discussion of: ``Nonparametric regression using deep neural networks with ReLU activation function''
- On a method for constructing ensembles of regression models
- Using the doubling dimension to analyze the generalization of learning algorithms
- On learning multicategory classification with sample queries.
- Obtaining fast error rates in nonconvex situations
- Applied harmonic analysis and data processing. Abstracts from the workshop held March 25--31, 2018
- On estimation of surrogate models for multivariate computer experiments
- Derivative reproducing properties for kernel methods in learning theory
- Fast approximation of betweenness centrality through sampling
- Asymptotics for regression models under loss of identifiability
- Constrained versions of Sauer's Lemma
- Large-width bounds for learning half-spaces on distance spaces
- Analysis of a two-layer neural network via displacement convexity
- The recovery of ridge functions on the hypercube suffers from the curse of dimensionality
- Reducing mechanism design to algorithm design via machine learning
- Beam element modelling of vehicle body-in-white applying artificial neural network
- Learning rates for multi-kernel linear programming classifiers
- Robust extreme learning machine for modeling with unknown noise
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Learning rates of multi-kernel regularized regression
- Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors
- Shape functional optimization with restrictions boosted with machine learning techniques
- Monte Carlo algorithms for optimal stopping and statistical learning
- Nonparametric nonlinear regression using polynomial and neural approximators: a numerical comparison
- Analysis of a multi-category classifier
- Robust inference for nonlinear regression models
- Derivation and analysis of parallel-in-time neural ordinary differential equations
- An empirical study of the complexity and randomness of prediction error sequences
- Model selection in utility-maximizing binary prediction
- Concentration estimates for the moving least-square method in learning theory
- Topological properties of the set of functions generated by neural networks of fixed size
- A selective overview of deep learning
- Linearized two-layers neural networks in high dimension
- Multi-category classifiers and sample width
- Logistic regression with weight grouping priors
- Estimation of an improved surrogate model in uncertainty quantification by neural networks
- Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path
- A theory of learning with similarity functions
- Parametrized classifiers for optimal EFT sensitivity
- Consistent online Gaussian process regression without the sample complexity bottleneck
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Using a similarity measure for credible classification
- On the complexity of binary samples
- Partitioning points by parallel planes
- Maximal width learning of binary functions
- Nonparametric regression with modified ReLU networks
- Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model
- Mean estimation and regression under heavy-tailed distributions: A survey
- Kernel learning at the first level of inference
- A probabilistic approach to case-based inference
- The computational complexity of densest region detection
- Relative expected instantaneous loss bounds
- Estimation and approximation bounds for gradient-based reinforcement learning
- Robust and resource-efficient identification of two hidden layer neural networks
- Constructive approximate interpolation by neural networks
- Learning and Convergence of the Normalized Radial Basis Functions Networks
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations
- Stationary Density Estimation of Itô Diffusions Using Deep Learning
- Deep learning: a statistical viewpoint
- Best Arm Identification for Contaminated Bandits
- Model Selection via the VC-Dimension
- A Statistical Learning Approach to Modal Regression
- Classification with reject option
- On grouping effect of elastic net
- Fingerprinting Codes and the Price of Approximate Differential Privacy
- Learning with sample dependent hypothesis spaces
- Random Neural Network Methods and Deep Learning
- Submodular Functions: Learnability, Structure, and Optimization
- Ten More Years of Error Rate Research
- Imaging conductivity from current density magnitude using neural networks
- \(L_{p}\)-norm Sauer-Shelah lemma for margin multi-category classifiers
- Classification based on prototypes with spheres of influence
- The VC dimension of metric balls under Fréchet and Hausdorff distances
- Wasserstein generative adversarial uncertainty quantification in physics-informed neural networks
- Time-dependent Dirac equation with physics-informed neural networks: computation and properties
- Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
- Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations
- A remark about a learning risk lower bound
- ReLU neural networks of polynomial size for exact maximum flow computation
- Approximating Probability Distributions by Using Wasserstein Generative Adversarial Networks
- Proximinality and uniformly approximable sets in \(L^p\)
- Solving Elliptic Problems with Singular Sources Using Singularity Splitting Deep Ritz Method
- Towards Lower Bounds on the Depth of ReLU Neural Networks
- Friedrichs Learning: Weak Solutions of Partial Differential Equations via Deep Learning
- Partial identification in nonseparable binary response models with endogenous regressors
- Estimating the clustering coefficient using sample complexity analysis
- Arithmetic circuits, structured matrices and (not so) deep learning
- Learning sparse and smooth functions by deep sigmoid nets
- An instance-based algorithm for deciding the bias of a coin
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- An unfeasibility view of neural network learning
- DPK: Deep Neural Network Approximation of the First Piola-Kirchhoff Stress
- A Rate of Convergence of Weak Adversarial Neural Networks for the Second Order Parabolic PDEs
- Learning bounds for quantum circuits in the agnostic setting
- VC dimensions of group convolutional neural networks
- Optimal deep neural networks by maximization of the approximation power
- Learning half-spaces on general infinite spaces equipped with a distance function
- Rates of convergence in active learning
- Learning from non-irreducible Markov chains
- Approximating the covariance ellipsoid
- Learning bounds via sample width for classifiers on finite metric spaces
- Generalization Bounds for Some Ordinal Regression Algorithms
- A Uniform Lower Error Bound for Half-Space Learning
- Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
- Error bounds for approximations with deep ReLU neural networks in \(W^{s,p}\) norms
- A hybrid classifier based on boxes and nearest neighbors
- Deep-Learning Solution to Portfolio Selection with Serially Dependent Returns
- Generalization Error in Deep Learning
- Core-Sets: Updated Survey
- Convergence of a Least-Squares Monte Carlo Algorithm for American Option Pricing with Dependent Sample Data
- PAC-learnability of probabilistic deterministic finite state automata in terms of variation distance
- Making the Most of Your Samples
- Deep Network Approximation for Smooth Functions
- Size, Depth and Energy of Threshold Circuits Computing Parity Function.
- Active Nearest-Neighbor Learning in Metric Spaces
- Aspects of discrete mathematics and probability in the theory of machine learning
- On the complexity of constrained VC-classes
- Approximation by neural networks and learning theory
- Nonasymptotic bounds on the \(L_{2}\) error of neural network regression estimates
- Supervised Learning by Support Vector Machines
- Rademacher Chaos Complexities for Learning the Kernel Problem
- Deep Convolutional Framelets: A General Deep Learning Framework for Inverse Problems
- Pseudo-dimension and entropy of manifolds formed by affine-invariant dictionary
- A PAC Approach to Application-Specific Algorithm Selection
- On combining machine learning with decision making
- Adaptive regression estimation with multilayer feedforward neural networks
- Measuring the Capacity of Sets of Functions in the Analysis of ERM
- Agnostic active learning
- Empirical Dynamic Programming
- Theory of Classification: a Survey of Some Recent Advances
- Stochastic approximation schemes for economic capital and risk margin computations
- On the VC-dimension and boolean functions with long runs
- On the Purity and Entropy of Mixed Gaussian States
- Another Look at Distance-Weighted Discrimination
- A note on penalized minimum distance estimation in nonparametric regression
- Multi-task and Lifelong Learning of Kernels
- Optimal \(L_{1}\) bandwidth selection for variable kernel density estimates
- Learning Finite-Dimensional Coding Schemes with Nonlinear Reconstruction Maps
- Deep neural networks can stably solve high-dimensional, noisy, non-linear inverse problems
- Neural network approximation and estimation of classifiers with classification boundary in a Barron class
- Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors
- On Learning and Convergence of RBF Networks in Regression Estimation and Classification
- Learning ability of interpolating deep convolutional neural networks
- Consistency of maximum likelihood for continuous-space network models. I
- Analysis of the rate of convergence of two regression estimates defined by neural features which are easy to implement