Stacked regressions
From MaRDI portal
Publication: 1916880
zbMath: 0849.68104 · MaRDI QID: Q1916880
Publication date: 14 July 1996
Published in: Machine Learning
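Breiman's stacked regressions combine the cross-validated (out-of-fold) predictions of several base regressors using weights constrained to be non-negative, found by least squares. A minimal pure-Python sketch of that idea, assuming two toy base learners and synthetic data (all function names and the data are illustrative, not from the paper):

```python
import random

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return lambda x: a * x + b

def fit_mean(xs, ys):
    # Trivial base learner: always predict the training mean.
    m = sum(ys) / len(ys)
    return lambda x: m

def nnls2(z1, z2, y):
    # Exact non-negative least squares for two predictors,
    # by checking every active set of the constraint w >= 0.
    a11 = sum(a * a for a in z1)
    a12 = sum(a * b for a, b in zip(z1, z2))
    a22 = sum(b * b for b in z2)
    b1 = sum(a * t for a, t in zip(z1, y))
    b2 = sum(b * t for b, t in zip(z2, y))
    det = a11 * a22 - a12 * a12
    candidates = [(0.0, 0.0)]
    if det != 0:  # unconstrained normal-equations solution
        candidates.append(((a22 * b1 - a12 * b2) / det,
                           (a11 * b2 - a12 * b1) / det))
    if a11 > 0:
        candidates.append((b1 / a11, 0.0))
    if a22 > 0:
        candidates.append((0.0, b2 / a22))
    best = None
    for w1, w2 in candidates:
        if w1 < 0 or w2 < 0:
            continue
        sse = sum((t - w1 * a - w2 * b) ** 2
                  for a, b, t in zip(z1, z2, y))
        if best is None or sse < best[0]:
            best = (sse, w1, w2)
    return best[1], best[2]

def stack(xs, ys, folds=5):
    # Out-of-fold predictions for each base learner, then
    # non-negative stacking weights fitted on those predictions.
    n = len(xs)
    z1, z2 = [0.0] * n, [0.0] * n
    for k in range(folds):
        train = [i for i in range(n) if i % folds != k]
        tx, ty = [xs[i] for i in train], [ys[i] for i in train]
        f1, f2 = fit_linear(tx, ty), fit_mean(tx, ty)
        for i in range(k, n, folds):  # held-out fold
            z1[i], z2[i] = f1(xs[i]), f2(xs[i])
    w1, w2 = nnls2(z1, z2, ys)
    # Refit base learners on all data; combine with stacking weights.
    f1, f2 = fit_linear(xs, ys), fit_mean(xs, ys)
    return lambda x: w1 * f1(x) + w2 * f2(x), (w1, w2)

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]
model, weights = stack(xs, ys)
print(weights)
```

On this linear toy data, essentially all weight lands on the linear base learner; the non-negativity constraint is what keeps the combination interpretable and stable when base learners are highly correlated.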
Related Items
- Interpreting uninterpretable predictors: kernel methods, Shtarkov solutions, and random forests
- Stable prediction in high-dimensional linear models
- The relative performance of ensemble methods with deep convolutional neural networks for image classification
- A clusterwise nonlinear regression algorithm for interval-valued data
- On the quantification of model uncertainty: a Bayesian perspective
- Sequential double cross-validation for assessment of added predictive ability in high-dimensional omic applications
- Random average shifted histograms
- Mortality forecasting using stacked regression ensembles
- Using stacking to average Bayesian predictive distributions (with discussion)
- Consensus analysis of multiple classifiers using non-repetitive variables: diagnostic application to microarray gene expression data
- Modelling and forecasting based on recursive incomplete pseudoinverse matrices
- High-order Fisher's discriminant analysis
- Heuristics of instability and stabilization in model selection
- A cooperative constructive method for neural networks for pattern recognition
- An asymptotically optimal kernel combined classifier
- Improved nearest neighbor classifiers by weighting and selection of predictors
- Using the Bayesian Shtarkov solution for predictions
- Stacked Grenander and rearrangement estimators of a discrete distribution
- A consistent combined classification rule
- A random forest based approach for predicting spreads in the primary catastrophe bond market
- Sparse ensembles using weighted combination methods based on linear programming
- An improved Afriat-Diewert-Parkan nonparametric production function estimator
- Boosting random subspace method
- EMG-Based Grasping Force Estimation for Robot Skill Transfer Learning
- Characterization of weighted quantile sum regression for highly correlated data in a risk analysis setting
- On hybrid classification using model assisted posterior estimates
- Dealing with expert bias in collective decision-making
- Defining replicability of prediction rules
- Imbalanced regression using regressor-classifier ensembles
- SUBiNN: a stacked uni- and bivariate \(k\)NN sparse ensemble
- On detecting the effect of exposure mixture
- Prediction Using Many Samples with Models Possibly Containing Partially Shared Parameters
- Bayesian hierarchical stacking: some models are (somewhere) useful
- Causal survival analysis under competing risks using longitudinal modified treatment policies
- A simple method for combining estimates to improve the overall error rates in classification
- Bayesian additive regression trees using Bayesian model averaging
- A new methodology for generating and combining statistical forecasting models to enhance competitive event prediction
- Nonparametric Causal Effects Based on Longitudinal Modified Treatment Policies
- Rejoinder: A nonparametric superefficient estimator of the average treatment effect
- Tree ensembles with rule structured horseshoe regularization
- Incorporating auxiliary information for improved prediction using combination of kernel machines
- A Clusterwise Center and Range Regression Model for Interval-Valued Data
- Algorithms for drug sensitivity prediction
- Tuning Algorithms for Stochastic Black-Box Optimization: State of the Art and Future Perspectives
- Spatial aggregation of local likelihood estimates with applications to classification
- A dynamic ensemble approach to robust classification in the presence of missing data
- Additive stacking for disaggregate electricity demand forecasting
- Two-level regression method using ensembles of trees with optimal divergence
- Cross-validated bagged learning
- Bundling classifiers by bagging trees
- Remembering Leo Breiman
- Node harvest
- A multi-loss super regression learner (MSRL) with application to survival prediction using proteomics
- Prediction and classification in nonlinear data analysis: something old, something new, something borrowed, something blue
- On the use of double cross-validation for the combination of proteomic mass spectral data for enhanced diagnosis and prediction
- Modular learning models in forecasting natural phenomena
- Ensemble strategies for a medical diagnostic decision support system: A breast cancer diagnosis application
- Supervised projection approach for boosting classifiers
- Ensemble quantile classifier
- Jackknife model averaging
- Flexible, boundary adapted, nonparametric methods for the estimation of univariate piecewise-smooth functions
- Prequential analysis of complex data with adaptive model reselection
- In praise of partially interpretable predictors
- An analytical toast to wine: Using stacked generalization to predict wine preference
- Aggregating classifiers via Rademacher–Walsh polynomials
- KFC: A clusterwise supervised learning procedure based on the aggregation of distances
- A nearest-neighbor-based ensemble classifier and its large-sample optimality
- Methods and algorithms of collective recognition
- On the interpretation of ensemble classifiers in terms of Bayes classifiers
- RADE: resource-efficient supervised anomaly detection using decision tree-based ensemble methods
- A kernel-based combined classification rule
- Hierarchical resampling for bagging in multistudy prediction with applications to human neurochemical sensing
- Optimal control of fed-batch processes based on multiple neural networks
- Stacking with Dynamic Weights on Base Models
- An almost surely optimal combined classification rule
- Two semi-parametric empirical Bayes estimators