Fair and Efficient Alternatives to Shapley-based Attribution Methods
Abstract
Interpretability of predictive machine learning models is critical in numerous application contexts that require decisions to be understood by end-users. It can be studied through the lens of local explainability and attribution methods, which focus on explaining a specific decision made by a model for a given input by evaluating the contribution of each input feature to the result, e.g., the probability assigned to a class. Many attribution methods rely on a game-theoretic formulation of the attribution problem based on an approximation of the popular Shapley value, even though the rationale motivating the use of this specific value is now questioned. In this paper we introduce the Fair-Efficient-Symmetric-Perturbation (FESP) attribution method as an alternative approach sharing relevant axiomatic properties with the Shapley value, alongside the Equal Surplus value (ES) commonly applied in cooperative games. Our results show that FESP and ES produce better attribution maps than state-of-the-art approaches in image and text classification settings.
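For context, the two cooperative-game values named above admit the following textbook definitions for a game $(N, v)$ with $n = |N|$ players; these are the standard formulas, not the paper's specific perturbation scheme or approximations:

$$\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr) \qquad \text{(Shapley value)}$$

$$ES_i(v) \;=\; v(\{i\}) \;+\; \frac{1}{n}\Bigl(v(N) - \sum_{j \in N} v(\{j\})\Bigr) \qquad \text{(Equal Surplus value)}$$

In attribution settings, players correspond to input features and $v(S)$ to the model output when only the features in $S$ are retained; note that an exact ES computation needs only the $n$ singleton coalitions plus the grand coalition, whereas an exact Shapley computation requires evaluating exponentially many coalitions.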