Correction of AI systems by linear discriminants: probabilistic foundations
DOI: 10.1016/J.INS.2018.07.040; zbMATH Open: 1441.68201; DBLP: journals/isci/GorbanGGMT18; arXiv: 1811.05321; OpenAlex: W2884716758; Wikidata: Q57384820; Scholia: Q57384820; MaRDI QID: Q2200569; FDO: Q2200569
Publication date: 22 September 2020
Published in: Information Sciences
Full work available at URL: https://arxiv.org/abs/1811.05321
Keywords: big data; error correction; measure concentration; blessing of dimensionality; linear discriminant; non-iterative learning
MSC classification:
- Probability distributions: general theory (60E05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Characterization and structure theory for multivariate probability distributions; copulas (62H05)
- Computational aspects of data analysis and big data (68T09)
- Geometric probability and stochastic geometry (60D05)
- Inequalities; stochastic orderings (60E15)
- General topics in artificial intelligence (68T01)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Title not available
- Title not available
- On the mathematical foundations of learning
- Title not available
- Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing
- Small ball probability estimates for log-concave measures
- Title not available
- Isoperimetric and analytic inequalities for log-concave probability measures
- Concentration of measure and isoperimetric inequalities in product spaces
- Approximation of the Sphere by Polytopes having Few Vertices
- From Brunn-Minkowski to Brascamp-Lieb and to logarithmic Sobolev inequalities
- Convexity
- Title not available
- Training a Support Vector Machine in the Primal
- The geometry of logconcave functions and sampling algorithms
- Interpolating thin-shell and sharp large-deviation estimates for isotropic log-concave measures
- On the shape of the convex hull of random points
- Is the \(k\)-NN classifier in high dimensions affected by the curse of dimensionality?
- Concentration property on probability spaces.
- Approximation with random bases: pro et contra
- Probabilistic lower bounds for approximation by shallow perceptron networks
- On the Geometry of Log-Concave Probability Measures with Bounded Log-Sobolev Constant
- Quasiorthogonal dimension of Euclidean spaces
- Blessing of dimensionality: mathematical foundations of the statistical physics of data
- Stochastic separation theorems
- Title not available
Cited In (9)
- Fast construction of correcting ensembles for legacy artificial intelligence systems: algorithms and a case study
- One-trial correction of legacy AI systems and stochastic separation theorems
- General stochastic separation theorems with optimal bounds
- Coping with AI errors with provable guarantees
- Blessing of dimensionality at the edge and geometry of few-shot learning
- Scrutinizing XAI using linear ground-truth data with suppressor variables
- Approximation of classifiers by deep perceptron networks
- Title not available
- Machine learning approach to the Floquet-Lindbladian problem
Recommendations
- Fast construction of correcting ensembles for legacy artificial intelligence systems: algorithms and a case study
- One-trial correction of legacy AI systems and stochastic separation theorems
- Stochastic separation theorems
- General stochastic separation theorems with optimal bounds
- Blessing of dimensionality: mathematical foundations of the statistical physics of data