Pages that link to "Item:Q97217"
From MaRDI portal
The following pages link to All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously (Q97217):
Displaying 31 items.
- iml (Q40680) (← links)
- Testing conditional independence in supervised learning algorithms (Q113672) (← links)
- Robust boosting for regression problems (Q133956) (← links)
- Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance (Q2066736) (← links)
- Interpretable machine learning: fundamental principles and 10 grand challenges (Q2074414) (← links)
- Structural importance and evolution: an application to financial transaction networks (Q2096775) (← links)
- Scrutinizing XAI using linear ground-truth data with suppressor variables (Q2163233) (← links)
- Techniques to improve ecological interpretability of black-box machine learning models. Case study on biological health of streams in the United States with gradient boosted trees (Q2163504) (← links)
- Grouped feature importance and combined features effect plot (Q2172623) (← links)
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values (Q2238680) (← links)
- Prospects for Higgs boson and new scalar resonant production searches in ttbb final state at the LHC (Q2698931) (← links)
- Visualizing Variable Importance and Variable Interaction Effects in Machine Learning Models (Q5057087) (← links)
- (Q5214285) (redirect page) (← links)
- Stochastic Tree Search for Estimating Optimal Dynamic Treatment Regimes (Q5857116) (← links)
- The explanation game: a formal framework for interpretable machine learning (Q6067308) (← links)
- Feature importance in neural networks as a means of interpretation for data-driven turbulence models (Q6095918) (← links)
- SLISEMAP: supervised dimensionality reduction through local explanations (Q6097137) (← links)
- Cross-model consensus of explanations and beyond for image classification models: an empirical study (Q6103575) (← links)
- Visualizing the Implicit Model Selection Tradeoff (Q6135962) (← links)
- Machine learning meta-models for fast parameter identification of the lattice discrete particle model (Q6164296) (← links)
- Explaining classifiers with measures of statistical association (Q6168907) (← links)
- Improved feature selection with simulation optimization (Q6173794) (← links)
- Considerations when learning additive explanations for black-box models (Q6176233) (← links)
- Encoding of data sets and algorithms (Q6546951) (← links)
- Forward stability and model path selection (Q6547741) (← links)
- Total effects with constrained features (Q6547752) (← links)
- Rejoinder to: ``Machine learning applications in non-life insurance'' (Q6578119) (← links)
- Conditional feature importance for mixed data (Q6589373) (← links)
- Customer churn prediction using a novel meta-classifier: an investigation on transaction, telecommunication and customer churn datasets (Q6621847) (← links)
- An illustration of model agnostic explainability methods applied to environmental data (Q6626539) (← links)
- Variable importance evaluation with personalized odds ratio for machine learning model interpretability with applications to electronic health records-based mortality prediction (Q6629964) (← links)