This repo currently includes the following documents:
- XAI_Course_2025_OJPV.pdf: slides for the XAI course, updated for 2025
- Captum_Quantus_Tutorial.ipynb: a tutorial on using Captum and Quantus
I also propose three new (although not particularly interesting) attribution methods:
- TiledOcclusion is a simple attribution method built upon standard Occlusion and implemented using Captum's interface.
- FusionGrad is an implementation of FusionGrad that more closely follows Captum's interface.
- ContrastiveAttribution computes the attribution of the selected output feature minus the average of all other output features.
You can install the package directly using pip:
pip install git+https://github.com/OscarPellicer/extra-attributions.git
However, it is recommended to install it for development: clone the repository and install it in editable mode:
git clone https://github.com/OscarPellicer/extra-attributions.git
cd extra-attributions
pip install -e .
You can use these attribution methods like any other Captum attribution method, e.g.:
from extra_attributions import TiledOcclusion, FusionGrad, ContrastiveAttribution
from captum.attr import IntegratedGradients, NoiseTunnel
# TiledOcclusion
tiled_occlusion = TiledOcclusion(model)
attributions_tocc = tiled_occlusion.attribute(input, target=target, k=[1,2,2], window=[3,18,18])
# FusionGrad
integrated_gradients = IntegratedGradients(model)
fusiongrad = FusionGrad(integrated_gradients, model=model)
attributions_ig_fg = fusiongrad.attribute(input, target=target,
    std=0.05, mean=1., n=5, additive_noise=False,  # Weight noise (multiplicative)
    sg_std=1.5, m=5, sg_additive_noise=True,       # Input noise (additive)
)
# ContrastiveAttribution (+ NoiseTunnel)
ig = IntegratedGradients(model)
contrastive_ig = ContrastiveAttribution(ig)
noise_tunnel_ig = NoiseTunnel(ig)
noise_tunnel_contrastive_ig = ContrastiveAttribution(noise_tunnel_ig)
noise_tunnel_tocc = NoiseTunnel(TiledOcclusion(model))
noise_tunnel_contrastive_tocc = ContrastiveAttribution(noise_tunnel_tocc)
attributions_ig_nt_c = noise_tunnel_contrastive_ig.attribute(input, target=target,
    nt_samples=10, n_steps=10, nt_type='smoothgrad',
    stdevs=1.5, internal_batch_size=5)
The idea of TiledOcclusion is to combine the power of larger occlusion patches with a high-resolution, smoother occlusion map, by adding the occlusion results from several slightly shifted versions of the same input image. If you are using this repository and need more information on the method, open an issue and I will try to improve this description. TiledOcclusion generally produces smoother results than Occlusion (especially at higher resolutions), which are also better aligned with our intuition for how attributions should behave.
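To make the idea concrete, here is a minimal sketch of the shift-and-average scheme written directly against Captum's Occlusion. The helper name, default window, and shift offsets are illustrative assumptions, not the repository's actual implementation (which exposes the behaviour through the k and window arguments shown above):

```python
import torch
from captum.attr import Occlusion

def tiled_occlusion_sketch(model, inputs, target, window=(3, 18, 18), shifts=((0, 0), (9, 9))):
    # Hypothetical helper for illustration only: average non-overlapping
    # Occlusion maps computed on slightly shifted copies of the input.
    occlusion = Occlusion(model)
    total = torch.zeros_like(inputs)
    for dy, dx in shifts:
        # Shift the image so the occlusion grid lands on different pixels
        shifted = torch.roll(inputs, shifts=(dy, dx), dims=(-2, -1))
        attr = occlusion.attribute(shifted, target=target,
                                   sliding_window_shapes=window,
                                   strides=window)  # non-overlapping patches
        # Shift back so the attributions align with the original image
        total += torch.roll(attr, shifts=(-dy, -dx), dims=(-2, -1))
    return total / len(shifts)
```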
See Captum_Quantus_Tutorial.ipynb for an example of how to use the method.
FusionGrad is a Captum interface to the FusionGrad method. See Captum_Quantus_Tutorial.ipynb for an example of how to use it, and the README.md in the NoiseGrad repo for more details on the method.
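For intuition, the sketch below shows the core FusionGrad recipe under simple assumptions: multiplicative Gaussian noise on the weights combined with additive Gaussian noise on the inputs, averaged over n × m attribution calls. The helper name and loop structure are illustrative, not the repository's code:

```python
import copy
import torch

def fusiongrad_sketch(base_attribution, model, inputs, target,
                      n=5, std=0.05, m=5, sg_std=1.5):
    # Hypothetical helper for illustration only: average a base Captum
    # attribution over n weight perturbations and m input perturbations.
    clean_state = copy.deepcopy(model.state_dict())
    total = torch.zeros_like(inputs)
    for _ in range(n):
        # Multiplicative weight noise: w <- w * N(1, std)
        noisy_state = {k: v * (1.0 + std * torch.randn_like(v)) if v.is_floating_point() else v
                       for k, v in clean_state.items()}
        model.load_state_dict(noisy_state)
        for _ in range(m):
            # Additive input noise: x <- x + N(0, sg_std)
            noisy_inputs = inputs + sg_std * torch.randn_like(inputs)
            total += base_attribution.attribute(noisy_inputs, target=target)
    model.load_state_dict(clean_state)  # restore the original weights
    return total / (n * m)
```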
ContrastiveAttribution takes any attribution method and computes the attribution of the selected output feature minus the average of all other output features (or just one other output feature, if given as an input parameter). It does so by wrapping the model, and hence it is as efficient as computing a standard attribution. See Captum_Quantus_Tutorial.ipynb for an example of how to use it.
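The model-wrapping trick can be sketched as follows; the wrapper class below is a simplified, hypothetical illustration (selected logit minus the mean of the rest), not the repository's implementation:

```python
import torch.nn as nn

class ContrastiveWrapperSketch(nn.Module):
    # Hypothetical wrapper for illustration only: its output for class j is
    # logit_j minus the mean of all other logits, so any Captum method applied
    # to it attributes that contrast instead of the raw logit.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        logits = self.model(x)  # shape: (batch, n_classes)
        n_classes = logits.shape[-1]
        others_mean = (logits.sum(dim=-1, keepdim=True) - logits) / (n_classes - 1)
        return logits - others_mean

# e.g. IntegratedGradients(ContrastiveWrapperSketch(model)).attribute(input, target=target)
```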
If you find this useful, please cite this GitHub repository!