On Shapley value interpretability in concept-based learning with formal concept analysis
From MaRDI portal
Recommendations
- Interpretable concept-based classification with Shapley values
- On Shapley Value for Measuring Importance of Dependent Inputs
- Games on concept lattices: Shapley value and core
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Machine learning on the basis of formal concept analysis
Cites work
- scientific article (untitled), zbMATH DE number 3823743
- scientific article (untitled), zbMATH DE number 1249514
- scientific article (untitled), zbMATH DE number 3078997
- A note on limit results for the Penrose-Banzhaf index
- Algorithms for computing the Shapley value of cooperative games on lattices
- Approximating concept stability
- Boundary relations, graphs and collective solutions
- Concept Lattices
- Conceptual exploration
- Galois connections in data analysis: Contributions from the Soviet era and modern Russian research
- Games on concept lattices: Shapley value and core
- Interpretable concept-based classification with Shapley values
- Mathematical aspects of concept analysis
- On attribute reduction in concept lattices: methods based on discernibility matrix are outperformed by basic clarification and reduction
- On interestingness measures of formal concepts
- On stability of a formal concept
- On the complexity of calculating factorials
- Towards Concise Representation for Taxonomies of Epistemic Communities
Cited in (3)
MaRDI item Q2107486