Pages that link to "Item:Q97217"
From MaRDI portal
The following pages link to All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously (Q97217):
Displayed 25 items.
- iml (Q40680) (← links)
- Testing conditional independence in supervised learning algorithms (Q113672) (← links)
- Robust boosting for regression problems (Q133956) (← links)
- Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance (Q2066736) (← links)
- Interpretable machine learning: fundamental principles and 10 grand challenges (Q2074414) (← links)
- Structural importance and evolution: an application to financial transaction networks (Q2096775) (← links)
- Scrutinizing XAI using linear ground-truth data with suppressor variables (Q2163233) (← links)
- Techniques to improve ecological interpretability of black-box machine learning models. Case study on biological health of streams in the United States with gradient boosted trees (Q2163504) (← links)
- Grouped feature importance and combined features effect plot (Q2172623) (← links)
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values (Q2238680) (← links)
- Prospects for Higgs boson and new scalar resonant production searches in *ttbb* final state at the LHC (Q2698931) (← links)
- Visualizing Variable Importance and Variable Interaction Effects in Machine Learning Models (Q5057087) (← links)
- (Q5214285) (redirect page) (← links)
- Stochastic Tree Search for Estimating Optimal Dynamic Treatment Regimes (Q5857116) (← links)
- The explanation game: a formal framework for interpretable machine learning (Q6067308) (← links)
- Feature importance in neural networks as a means of interpretation for data-driven turbulence models (Q6095918) (← links)
- SLISEMAP: supervised dimensionality reduction through local explanations (Q6097137) (← links)
- Cross-model consensus of explanations and beyond for image classification models: an empirical study (Q6103575) (← links)
- Variable Selection Via Thompson Sampling (Q6107208) (← links)
- Visualizing the Implicit Model Selection Tradeoff (Q6135962) (← links)
- Machine learning meta-models for fast parameter identification of the lattice discrete particle model (Q6164296) (← links)
- Explaining classifiers with measures of statistical association (Q6168907) (← links)
- Improved feature selection with simulation optimization (Q6173794) (← links)
- Considerations when learning additive explanations for black-box models (Q6176233) (← links)
- Interpretable Architecture Neural Networks for Function Visualization (Q6181398) (← links)