Bootstrapping the out-of-sample predictions for efficient and accurate cross-validation
Publication: 1722724 (MaRDI QID: Q1722724)
Identifiers: DOI 10.1007/s10994-018-5714-4 · zbMATH 1486.62078 · DBLP journals/ml/TsamardinosGB18 · arXiv 1708.07180 · OpenAlex W2963686598 · Wikidata Q58581938 · Scholia Q58581938
Authors: Ioannis Tsamardinos, Elissavet Greasidou, Giorgos Borboudakis
Publication date: 18 February 2019
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1708.07180
Classification: Classification and discrimination; cluster analysis (statistical aspects) (62H30) · Bootstrap, jackknife and other resampling methods (62F40)
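The approach indicated by the title pools the out-of-sample predictions produced during cross-validation and bootstraps them to obtain a performance estimate and a confidence interval without retraining any models. The sketch below is a minimal illustration of that general idea only, not the paper's exact algorithm (which also corrects the bias from selecting the best configuration); the scikit-learn estimator, fold count, and bootstrap size used here are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Collect the pooled out-of-sample predictions from K-fold cross-validation.
oos_pred = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    oos_pred[test_idx] = model.predict(X[test_idx])

# Bootstrap the pooled predictions (no model retraining) to estimate
# performance and a percentile confidence interval.
B = 1000
n = len(y)
scores = np.array([
    accuracy_score(y[idx], oos_pred[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(B))
])
print("accuracy estimate:", scores.mean())
print("95% CI:", np.percentile(scores, [2.5, 97.5]))
```

Because the bootstrap resamples only the pooled predictions, its cost is negligible compared with refitting models inside each bootstrap iteration, which is the efficiency argument the title alludes to.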
Related Items (2)
Factor augmented artificial neural network vs deep learning for forecasting global liquidity dynamics ⋮ Forward-Backward Selection with Early Dropping
Cites Work
- On the choice and influence of the number of boosting steps for high-dimensional linear Cox-models
- Computational algorithms for double bootstrap confidence intervals
- Estimating the dimension of a model
- Multiple comparisons in induction algorithms
- Support-vector networks
- A bias correction for the minimum error rate in cross-validation
- Correcting the Optimal Resampling‐Based Error Rate by Estimating the Error Rate of Wrapper Algorithms
- The comparison of percentages in matched samples
- Random forests
- A new look at the statistical model identification