Pages that link to "Item:Q2321252"
From MaRDI portal
The following pages link to Explanation in artificial intelligence: insights from the social sciences (Q2321252):
Displaying 50 items.
- On cognitive preferences and the plausibility of rule-based models (Q782445)
- Mathematical optimization in classification and regression trees (Q828748)
- Editable machine learning models? A rule-based framework for user studies of explainability (Q2022486)
- The spherical \(k\)-means++ algorithm via local search (Q2039652)
- Explanation in AI and law: past, present and future (Q2046030)
- Beneficial and harmful explanatory machine learning (Q2051274)
- Local and global explanations of agent behavior: integrating strategy summaries with saliency maps (Q2060691)
- The quest of parsimonious XAI: a human-agent architecture for explanation formulation (Q2060710)
- Knowledge graphs as tools for explainable machine learning: a survey (Q2060751)
- Toward an explainable machine learning model for claim frequency: a use case in car insurance pricing with telematics data (Q2066785)
- A maximum-margin multisphere approach for binary multiple instance learning (Q2077935)
- Defining formal explanation in classical logic by substructural derivability (Q2117787)
- SAT-based rigorous explanations for decision lists (Q2118305)
- A local method for identifying causal relations under Markov equivalence (Q2124443)
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values (Q2124455)
- Non-monotonic explanation functions (Q2145997)
- Necessary and sufficient explanations for argumentation-based conclusions (Q2146001)
- Persuasive contrastive explanations for Bayesian networks (Q2146023)
- Probabilistic causes in Markov chains (Q2147196)
- Generating contrastive explanations for inductive logic programming based on a near miss approach (Q2163226)
- Heterogeneous causal effects with imperfect compliance: a Bayesian machine learning approach (Q2170451)
- Interpreting deep learning models with marginal attribution by conditioning on quantiles (Q2172619)
- Some thoughts on knowledge-enhanced machine learning (Q2237522)
- Argumentative explanations for interactive recommendations (Q2238596)
- Counterfactual state explanations for reinforcement learning agents via generative deep learning (Q2238641)
- Paracoherent answer set computation (Q2238696)
- Why bad coffee? Explaining BDI agent behaviour with valuings (Q2238733)
- Detecting correlations and triangular arbitrage opportunities in the Forex by means of multifractal detrended cross-correlations analysis (Q2296838)
- Story embedding: learning distributed representations of stories based on character networks (Q2303512)
- Model transparency and interpretability: survey and application to the insurance industry (Q2677927)
- Logic explained networks (Q2680793)
- A machine learning approach to differentiate between COVID-19 and influenza infection using synthetic infection and immune response data (Q2686729)
- (Q5018520)
- Learning Optimal Decision Sets and Lists with SAT (Q5026234)
- Explainable Deep Learning: A Field Guide for the Uninitiated (Q5026262)
- On Tackling Explanation Redundancy in Decision Trees (Q5041018)
- Model Uncertainty and Correctability for Directed Graphical Models (Q5052911)
- (Q5053199)
- (Q5053212)
- Formal Methods in FCA and Big Data (Q5054986)
- A Comprehensive Framework for Learning Declarative Action Models (Q5094057)
- Exploiting Game Theory for Analysing Justifications (Q5140022)
- A Survey on the Explainability of Supervised Machine Learning (Q5145841)
- Witnesses for Answer Sets of Logic Programs (Q5886522)
- Comments on ``Data science, big data and statistics'' (Q5970962)
- A \(k\)-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning (Q6067036)
- The explanation game: a formal framework for interpretable machine learning (Q6067308)
- Explainable acceptance in probabilistic and incomplete abstract argumentation frameworks (Q6080638)
- Counterfactuals as modal conditionals, and their probability (Q6080641)
- Explainable subgradient tree boosting for prescriptive analytics in operations management (Q6087515)