Rethinking statistical learning theory: learning using statistical invariants
DOI: 10.1007/s10994-018-5742-0 | zbMATH Open: 1480.62064 | OpenAlex: W2884692446 | Wikidata: Q129492950 | Scholia: Q129492950 | MaRDI QID: Q669285 | FDO: Q669285
Authors: Vladimir Vapnik, Rauf Izmailov
Publication date: 15 March 2019
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-018-5742-0
Keywords
classification; learning theory; regression; reproducing kernel Hilbert space; support vector machine; weak convergence; neural network; ill-posed problem; conditional probability; kernel function; privileged information; intelligent teacher
MSC Classification
- Nonparametric estimation (62G05)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Computational learning theory (68Q32)
- Knowledge representation (68T30)
Cites Work
- The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959
- Nonparametric methods for reconstructing probability densities
- Knowledge transfer in SVM and neural networks
- Theorie der Zeichenerkennung [Theory of Pattern Recognition]
Cited In (7)
- Past, current and future trends and challenges in non-deterministic fracture mechanics: a review
- A new learning paradigm: learning using privileged information
- Parallel learning -- a new framework for machine learning
- Complete statistical theory of learning
- Machine Learning and Invariant Theory
- Learning inductive invariants by sampling from frequency distributions
- Inverse statistical learning