Explainability as statistical inference

Open archive

Senetaire, Hugo Henri Joseph | Garreau, Damien | Frellsen, Jes | Mattei, Pierre-Alexandre

Published by CCSD

International audience. A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our method is an instance of amortized interpretability models, where a neural network is used as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularised maximum likelihood for our general model. We propose new datasets with ground-truth selections, which allow for the evaluation of feature importance maps. Using these datasets, we show experimentally that using multiple imputation provides more reasonable interpretations.
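To make the amortized setup concrete, below is a minimal sketch (assuming PyTorch) of the general idea described in the abstract: a selector network outputs per-feature selection probabilities, non-selected features are filled in by multiple imputation, and the predictor is trained by regularised maximum likelihood on the imputed inputs. The module names, the standard-normal placeholder imputer, and the L1 regulariser are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AmortizedSelector(nn.Module):
    """Outputs per-feature selection probabilities for each input (hypothetical)."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):
        # Probabilities in (0, 1) that each feature is part of the explanation.
        return torch.sigmoid(self.net(x))


def training_step(selector, predictor, x, y, n_imputations=5, reg=1e-2):
    """One maximum-likelihood step with multiple imputation of masked features.

    The imputations are drawn from a standard normal as a placeholder;
    the paper argues for richer multiple-imputation schemes.
    """
    probs = selector(x)                                    # (batch, n_features)
    # A relaxed Bernoulli keeps the mask differentiable w.r.t. the selector.
    mask = torch.distributions.RelaxedBernoulli(
        temperature=torch.tensor(0.5), probs=probs).rsample()

    nll = 0.0
    for _ in range(n_imputations):
        imputed = torch.randn_like(x)                      # placeholder imputer
        x_masked = mask * x + (1 - mask) * imputed         # keep selected features
        logits = predictor(x_masked)
        nll = nll + F.cross_entropy(logits, y)             # negative log-likelihood
    nll = nll / n_imputations

    # Sparsity penalty on selection probabilities encourages compact explanations.
    loss = nll + reg * probs.abs().mean()
    return loss, probs
```

At inference time, a single forward pass of the selector yields the feature importance map for a new input, which is what makes the interpretation amortized rather than computed per example by an optimisation loop.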


Suggestions

From the same authors

Model-agnostic out-of-distribution detection using combined statistical tests

Open archive | Bergamin, Federico | CCSD

International audience. We present simple methods for out-of-distribution detection using a trained generative model. These techniques, based on classical statistical tests, are model-agnostic in the sense that they...

Negative Dependence Tightens Variational Bounds

Open archive | Mattei, Pierre-Alexandre | CCSD

International audience. Importance weighted variational inference (IWVI) is a promising strategy for learning latent variable models. IWVI uses new variational bounds, known as Monte Carlo objectives (MCOs), obtaine...

not-MIWAE: Deep Generative Modelling with Missing not at Random Data

Open archive | Ipsen, Niels Bruun | CCSD

International audience. When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account while doing likelihood-based inference. We present an approach for b...
